filebrowser provides a file-managing interface within a specified directory, and it can be used to upload, delete, preview, rename, and edit your files. It allows the creation of multiple users, and each user can have their own directory. It can be used as a standalone app or as middleware.
The most powerful Command Line for a CI/CD Platform
cdsctl is the CDS command line: you can script everything with it. cdsctl also provides some handy commands, such as cdsctl shell, to browse your projects and workflows without the need to open a browser.
Trireme is an open-source library curated by Aporeto that provides cryptographic isolation for cloud-native applications. Trireme-lib is a library that makes it possible to set up security policies and segment applications by enforcing end-to-end authentication and authorization, without the need for complex control planes or IP/port-centric ACLs and east-west firewalls.
Trireme-lib supports containers and Linux processes as well as user-based activation, and it allows security policy enforcement between any of these entities.
In the Trireme world, a processing unit (PU) end-point can be a container, a Kubernetes POD, or a general Linux process. It can also be a user session to a particular server. We will be referring to processing units as PUs throughout this discussion.
The technology behind Trireme is streamlined, elegant, and simple. It is based on the concepts of Zero-Trust networking:
The identity is the set of attributes and metadata that describes
the container as key/value pairs. Trireme provides an extensible
interface for defining these identities. Users can choose customized
methods appropriate to their environment for establishing PU identity.
For example, in a Kubernetes environment, the identity can be the set of
labels identifying a POD.
There is an authorization policy that defines when PUs with
different types of identity attributes can interact or exchange traffic.
The authorization policy implements an Attribute-Based Access Control
(ABAC) mechanism (https://en.wikipedia.org/wiki/Attribute-Based_Access_Control), where the policy describes relationships between identity attributes.
Every communication between two PUs is controlled through a
cryptographic end-to-end authentication and authorization step, by
overlaying an authorization function over the TCP negotiation. The
authorization steps are performed during the SYN/SYNACK/ACK negotiation.
The result of this approach is the decoupling of network security
from the underlying network infrastructure because this approach is
centered on workload identity attributes and interactions between
workloads. Network security can be achieved simply by managing
application identity and authorization policy. Segmentation granularity
can be adjusted based on the needs of the platform.
Trireme is a node-centric library. Each node participating in the
Trireme cluster must spawn one instance of a process that uses this
library to transparently insert the authentication and authorization
step. Trireme provides the data path functions but does not implement
either the identity management or the policy resolution function.
Function implementation depends on the particular operational
environment. Users have to provide PolicyLogic (ABAC “rules”) to Trireme
for well-defined PUs, such as containers.
This tool is a work-in-progress. Please use liberally (with discretion) and report any bugs!
Checkup can be customized to check up on any of your sites or
services at any time, from any infrastructure, using any storage
provider of your choice (assuming an integration exists for your storage
provider). The status page can be customized to your liking since you can do your checks however you want.
Checkup currently supports these checkers:
Checkup implements these storage providers:
Local file system
SQL (sqlite3 or PostgreSQL)
Checkup can even send notifications through your service of choice (if an integration exists).
How it Works
There are 3 components:
Storage. You set up storage space for the results of the checks.
Checks. You run checks on whatever endpoints you have as often as you want.
Status Page. You host the status page. Caddy makes this super easy. The status page downloads recent check files from storage and renders the results client-side.
OpenStorage is an API abstraction layer providing support for multiple public APIs, including the OpenStorage SDK, CSI, and the Docker Volume API. Developers using OpenStorage for their storage systems can expect it to work seamlessly with any of the supported public APIs. These implementations give users the ability to run stateful services in Linux containers on multiple hosts.
OpenStorage, implemented in Golang, makes it simple for developers to write a single implementation which supports many methods of control. Not only does OpenStorage allow storage developers to integrate their storage system with container orchestration systems, it also enables application developers to use the OpenStorage SDK to manage and expose the latest storage features to their applications.
Supported Control APIs
The Container Storage Interface (CSI) is the standard way for a container orchestrator such as Kubernetes or Mesosphere to communicate with a storage provider. OSD provides a CSI implementation to provision storage volumes to a container on behalf of any third-party OSD driver and ensures the volumes are available in a multi-host environment.
OSD also integrates with the Docker Volumes API, provisioning storage to a container on behalf of any third-party OSD driver and ensuring the volumes are available in a multi-host environment.
CSI and Docker Volumes API provide a very generic storage control model, but with the OpenStorage SDK,
applications can take control and utilize the latest features of a
storage system. For example, with the OpenStorage SDK, applications can
control their volume backups, schedules, and so on.
inlets combines a reverse proxy and websocket tunnels to expose your
internal and development endpoints to the public Internet via an
exit-node. An exit-node may be a 5-10 USD VPS or any other computer with
an IPv4 IP address.
Why do we need this project? Similar tools such as ngrok or Argo Tunnel from Cloudflare are closed-source, have built-in limits, can work out expensive, and have limited support for arm/arm64. Ngrok is also often banned by corporate firewall policies, meaning it can be unusable. Other open-source tunnel tools are designed to set up only a static tunnel. inlets aims to dynamically bind and discover your local services to DNS entries, with automated TLS certificates, on a public IP address over its websocket tunnel.
When combined with SSL, inlets can be used with any corporate HTTP proxy that supports CONNECT.
Mainflux is a modern, scalable, secure, open-source, and patent-free IoT cloud platform written in Go.
It accepts user, device, and application connections over various network protocols (e.g. HTTP,
MQTT, WebSocket, CoAP), thus making a seamless bridge between them. It is used as the IoT middleware
for building complex IoT solutions.
Multi-protocol connectivity and protocol bridging (HTTP, MQTT, WebSocket and CoAP)
Crossplane is an open-source multi-cloud control plane written in Go. It introduces workload and resource abstractions on top of existing managed services that enable a high degree of workload portability across cloud providers. A single Crossplane enables the provisioning and full lifecycle management of services and infrastructure across a wide range of providers, offerings, vendors, regions, and clusters. Crossplane offers a universal API for cloud computing, a workload scheduler, and a set of smart controllers that can automate work across clouds.
Crossplane presents a declarative management-style API that covers a wide range of portable abstractions, including databases, message queues, buckets, data pipelines, serverless, clusters, and many more coming. It is based on the declarative resource model of the popular Go-based Kubernetes project, and applies many of the lessons learned in container orchestration to multi-cloud workload and resource orchestration.
Crossplane supports a clean separation of concerns between developers and administrators. Developers define workloads without having to worry about implementation details, environment constraints, or policies, while administrators define environment specifics and policies. This separation of concerns leads to a higher degree of reusability and reduces complexity.
Crossplane includes a workload scheduler that can factor in a number of criteria, including capabilities, availability, reliability, cost, regions, and performance, when deploying workloads and their resources. The scheduler works alongside specialized resource controllers to ensure policies set by administrators are honored.