Enable TLS / HTTPS support
These settings are for configuring the TLS / HTTPS support for datarhei Core.
If TLS is enabled, the HTTPS server will listen on this address. The default address is :8181.
The default :8181 listens on all interfaces on port 8181. To listen on a specific interface only, prepend its IP, e.g. 127.0.0.1:8181 to listen only on the loopback interface.
Set this value to true in order to enable TLS / HTTPS support. If enabled, you have to either provide your own certificate (see cert_file and key_file) or enable automatic certificates from Let's Encrypt (see auto).
If TLS is enabled, an HTTP server listening on address will be started additionally. This server provides access to everything the HTTPS server provides; additionally, it allows ACME HTTP-01 challenges in case Let's Encrypt (auto) certificates are enabled.
By default this is set to false.
Enable automatic certificate generation from Let's Encrypt. This only works if enable is set to true and at least one public hostname is defined in host.name. All listed hostnames will be included in the certificate. All listed public hostnames are required to point to the host datarhei Core is running on.
In order for Let's Encrypt to resolve the HTTP-01 challenge, the HTTP server of the datarhei Core must be reachable on port 80, either by setting address to :80 or by forwarding/mapping port 80 to the actual port the HTTP server is listening on.
The obtained certificates will be stored in the /cert subdirectory of db.dir to be available after a restart.
Any paths provided in cert_file and key_file will be ignored.
By default this is set to false.
An email address that is required for Let's Encrypt in order to receive a certificate.
By default the email address cert@datarhei.com is used.
If you bring your own certificate, provide the path to the certificate file in PEM format.
By default this is not set.
If you bring your own certificate, provide the path to the key file in PEM format.
By default this is not set.
If you want to use automatic certificates from Let's Encrypt, set tls.enable and tls.auto to true, and set host.name to the domain name under which this host will be reachable. Otherwise the ACME HTTP-01 challenge will not work.
To create a self-signed certificate and key file pair, run this command and provide a reasonable value for the Common Name (CN). The CN is the fully qualified name of the host the instance is running on (e.g., localhost). You can also use an IP address or a wildcard name, e.g., *.example.com.
RSA SSL certificate
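A minimal sketch of such a command, assuming the file names key.pem and cert.pem and a validity of one year:

```sh
# self-signed RSA certificate and key, valid for one year; adjust the CN to your host
openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -subj "/CN=localhost"
```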
ECDSA SSL certificate
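A minimal sketch for an ECDSA pair, assuming the prime256v1 curve and the same file names:

```sh
# generate an ECDSA key on the prime256v1 curve, then a self-signed certificate for it
openssl ecparam -name prime256v1 -genkey -noout -out key.pem
openssl req -x509 -nodes -key key.pem -out cert.pem -days 365 -subj "/CN=localhost"
```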
Call openssl ecparam -list_curves to list all supported curves.
Settings for the host datarhei Core is running on.
A list of public host/domain names or IPs this host is reachable under. For the ENV use a comma-separated list of public host/domain names or IPs.
The default is an empty list.
Enable detection of public IP addresses in case the list of names is empty.
By default this is set to true.
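A sketch of the corresponding config section; the exact nesting is an assumption based on the fields described above:

```json
{
  "host": {
    "name": ["cdn.example.com"],
    "auto": true
  }
}
```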
You have to provide the location of the config file by setting the environment variable CORE_CONFIGFILE to the path of the config file. Example:
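A sketch, assuming the Core binary is called core and the config file lives at /opt/core/config/config.json:

```sh
CORE_CONFIGFILE=/opt/core/config/config.json ./core
```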
The config file is written in JSON format.
If the config file doesn't exist yet, it will be created and its fields will be filled with their default values.
If the config file is partially complete or of an older version, it will be upgraded and the missing fields will be filled with their default values.
If you don't provide the CORE_CONFIGFILE environment variable, the default config values will be used and the configuration will not be persisted to disk.
As of version 16.12.0:
If no path is given in the environment variable CORE_CONFIGFILE, different standard locations will be probed:
os.UserConfigDir() + /datarhei-core/config.js
os.UserHomeDir() + /.config/datarhei-core/config.js
./config/config.js
If the config.js doesn't exist in any of these locations, it will be assumed at ./config/config.js
A minimal valid config file must contain at least the config version:
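For example (the concrete version number is an assumption; use the version reported by your Core):

```json
{
  "version": 3
}
```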
Configuration values can be changed by editing the config file directly, via the JSON API (API for short), or via environment variables (ENV for short). All environment variables have the prefix CORE_ followed by the JSON names in uppercase. Example:
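A sketch of the mapping, using the log level setting as an example:

```sh
# JSON: {"log": {"level": "debug"}}
# ENV equivalent:
CORE_LOG_LEVEL=debug
```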
In the following, every field of the configuration file is described in detail:
ID of the Core. If not set, a UUIDv4 will be generated. Default: unset
Human-readable name of the Core. If not set, a name will be generated. Default: unset
HTTP listening address.
Default: :8080
The default :8080 listens on all interfaces on port 8080. To listen on a specific interface only, prepend its IP, e.g. 127.0.0.1:8080 to listen only on the loopback interface.
Log settings.
Database (processes, metadata, ...) endpoint.
Configuration to detect or set the host-/domainname.
API Security options.
TLS/HTTPS settings (also required for RTMPS).
General configuration, DiskFS, MemFS, and S3.
RTMP server for publishing and playing streams.
SRT server for publishing and playing streams.
General FFmpeg settings.
HLS-/MPEG-DASH session management and bandwidth limitations.
General metrics settings.
HTTP/S route configuration (e.g., to inject UIs).
Core / Golang debugging options.
All about datarhei Update-Checks and data tracking.
CORE_UPDATE_CHECK=true
CORE_SERVICE_URL=https://service.datarhei.com
Check for updates and send anonymized data (default: false).
Requires service.url.
IP addresses are anonymized and stored for 30 days on servers in the EU.
URL for the update_check service API.
Default: https://service.datarhei.com
About anonymized data:
We receive: id, os architecture, uptime, process stats (total: running, failed, killed), viewer count
The data is used exclusively for the further development of the products and error detection. Domains/IP addresses, companies, and persons remain anonymous.
Manual version built for Core v16.11+
The datarhei Core is a process management solution for FFmpeg that offers a range of interfaces for media content, including HTTP, RTMP, SRT, and storage options. It is optimized for use in virtual environments such as Docker. It has been implemented in various contexts, from small-scale applications like Restreamer to large-scale, multi-instance frameworks spanning multiple locations, such as dedicated servers, cloud instances, and single-board computers. The datarhei Core stands out from traditional media servers by emphasizing FFmpeg and its capabilities rather than focusing on media conversion.
The objectives of development are:
Unhindered use of FFmpeg processes
Portability of FFmpeg, including management across development and production environments
Scalability of FFmpeg-based applications through the ability to offload processes to additional instances
Streamlining of media product development by focusing on features and design.
Run multiple processes via API
Unrestricted FFmpeg commands in process configuration.
Error detection and recovery (e.g., FFmpeg stalls, dumps)
Referencing for process chaining (pipelines)
Placeholders for storage, RTMP, and SRT usage (automatic credentials management and URL resolution)
Logs (access to current stdout/stderr)
Log history (configurable log history, e.g., for error analysis)
Resource limitation (max. CPU and MEMORY usage per process)
Statistics (like FFmpeg progress per input and output, CPU and MEMORY, state, uptime)
Input verification (like FFprobe)
Metadata (option to store additional information like a title)
Configurable file systems (in-memory, disk-mount, S3)
HTTP/S, RTMP/S, and SRT services, including Let's Encrypt
Bandwidth and session limiting for HLS/MPEG DASH sessions (protects restreams from congestion)
Viewer session API and logging
HTTP REST and GraphQL API
Swagger documentation
Metrics incl. Prometheus support (also detects POSIX and cgroups resources)
Docker images for fast setup of development environments up to the integration of cloud resources
How to start and configure the Core via Docker.
Native installations without Docker are possible, but not supported; issues concerning native installations are not processed.
Select the CPU architecture and, if desired, the GPU support:
Pi 3 supports MMAL/OMX for 32 Bit (64 Bit is not supported).
Pi 4 supports V4L2 M2M for 32/64 Bit.
Hint: raspi-config requires gpu_mem=256 or more.
All default values can be changed and are described on the Configuration page.
Complete example:
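A sketch of such a command; the container paths /core/config and /core/data and the image tag are assumptions, adjust them to your setup:

```sh
docker run -d --name core \
  -p 8080:8080 -p 8181:8181 \
  -p 1935:1935 -p 1936:1936 \
  -p 6000:6000/udp \
  -v ${PWD}/core/config:/core/config \
  -v ${PWD}/core/data:/core/data \
  datarhei/core:latest
```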
${PWD} creates a folder structure in the folder where the command is issued. Please correct this if necessary.
Replace %USERPROFILE% with something like c:/myfolder
$HOST_DIR can be adjusted without reconfiguring the app. For the $CORE_DIR, check the configuration instructions.
Directory for holding the config and operational data.
${PWD} creates a folder structure in the folder where the command is issued. Please correct this if necessary.
Directory on disk, exposed on HTTP path “/“.
${PWD} creates a folder structure in the folder where the command is issued. Please correct this if necessary.
$HOST_PORT can be adjusted without reconfiguring the app. For the $CORE_PORT, check the configuration instructions.
HTTP listening address.
HTTPS listening address.
RTMP server listen address.
SRT server listen address.
/udp is required for SRT port-mapping.
With --net=host the container is started without network isolation. In this case, port forwarding is not required.
More in the Configuration instructions.
Allow FFmpeg to access GPUs, USB, and other devices available in the container.
If seccomp is active and no internal-to-external communication is possible:
To manage the Core container via systemd (the Linux service manager):
Adjust the docker command options according to your setup.
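A sketch of a unit file (e.g. /etc/systemd/system/core.service); paths, ports, and image tag are assumptions:

```ini
[Unit]
Description=datarhei Core (Docker)
After=docker.service
Requires=docker.service

[Service]
Restart=always
# remove a leftover container before starting a fresh one
ExecStartPre=-/usr/bin/docker rm -f core
ExecStart=/usr/bin/docker run --name core \
  -p 8080:8080 -p 1935:1935 -p 6000:6000/udp \
  -v /opt/core/config:/core/config \
  -v /opt/core/data:/core/data \
  datarhei/core:latest
ExecStop=/usr/bin/docker stop core

[Install]
WantedBy=multi-user.target
```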
Logging settings for the datarhei Core.
The verbosity of the logging. The datarhei Core writes its logs to stderr. Possible values are:
silent: No logging at all.
error: Only errors will be logged.
warn: Warnings and errors will be logged.
info: General information, warnings, and errors will be logged.
debug: Debug messages and everything else will be logged. This is very chatty.
The default logging level is info.
Logging topics allow you to restrict what type of messages will be logged. This is practical if you enable debug logging and want to see only the logs you're interested in. An empty list of topics means that all topics will be logged.
A non-exhaustive list of logging topics:
cleanup
config
core
diskfs
http
httpcache
https
let's encrypt
memfs
process
processstore
rtmp
rtmp/s
rtmps
update
service
session
sessionstore
srt
By default all topics are logged.
The log is also kept in memory for retrieval via the API. This value defines how many lines should be kept in memory.
The default is 1000 lines.
Settings for accessing the available storage types. The storages are accessible via HTTP, mounted to different paths.
Path to a file with the MIME type definitions. This file contains one MIME type per line, followed by a list of file extensions (including the "."). Files served from the storages will have the matching MIME type associated with them.
Example:
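A sketch of a few lines in this format (the concrete entries are illustrative):

```
application/vnd.apple.mpegurl .m3u8
video/mp2t .ts
video/mp4 .mp4
image/jpeg .jpg .jpeg
```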
Relative paths are interpreted relative to where the datarhei Core binary is executed.
Default: ./mime.types
Define a list of allowed CORS origins for accessing the storages.
By default it contains the single element *, allowing access from anywhere.
The disk storage is mounted at / via the HTTP server.
The memory storage is mounted at /memfs via the HTTP server.
The S3 storage is mounted at the configured path via the HTTP server.
S3 storage is available as of version 16.12.0
The settings for the in-memory filesystem. This filesystem is accessible on /memfs via HTTP. This filesystem can only be accessed via HTTP. Writing to and deleting from the filesystem can be restricted by HTTP basic auth.
Set this value to true in order to enable basic auth for PUT, POST, and DELETE operations on /memfs. Read access (GET, HEAD) is not restricted. If enabled, you have to define a username and a password.
It is highly recommended to enable basic auth for write operations on /memfs.
By default this value is set to false.
Username for Basic-Auth of /memfs. This has to be set if basic auth is enabled.
By default this value is not set, i.e. an empty string.
Password for Basic-Auth of /memfs. This has to be set if basic auth is enabled.
By default this value is not set, i.e. an empty string.
The maximum amount of data that is allowed to be stored in this filesystem. The value is interpreted as megabytes. A 507 Insufficient Storage will be returned if you hit the limit. Use a value equal to or smaller than 0 to not set any limit; the available memory is then the limit.
By default no limit is set, i.e. a value of 0.
Whether to automatically remove the oldest files if the filesystem is full.
By default this value is set to false.
The settings for the disk filesystem. This filesystem is accessible on / via HTTP. Via HTTP this filesystem can only be accessed for reading; writing to and deleting from the filesystem is possible via the API.
Path to a directory on disk. It will be exposed on / for reading.
Relative paths are interpreted relative to where the datarhei Core binary is executed.
By default it is set to ./data.
The maximum amount of data that is allowed to be stored in this filesystem. The value is interpreted as megabytes. A 507 Insufficient Storage will be returned if you hit the limit. Use a value equal to or smaller than 0 to not impose any limit; the available space on the disk is then the limit.
By default no limit is set, i.e. a value of 0.
Set this value to true in order to enable the cache for the disk. The cache is an LRU cache, i.e. the least recently accessed files will be removed from the cache if the cache is full and a new file is to be added.
By default the value is set to true.
Limit for the size of the cache in megabytes. A value of 0 doesn't impose any limit; the available memory is then the limit.
By default no limit is set, i.e. a value of 0.
Number of seconds to keep a file in the cache.
By default this value is set to 300 seconds.
Limit for the size of a file to be allowed into the cache, in megabytes.
By default this value is set to 1 megabyte.
A list of file extensions to cache, e.g. [".ts", ".mp4"]. Leave the list empty in order to cache all files. Use a space-separated list of extensions for the environment variable, e.g. .ts .mp4.
By default the list is empty.
A list of file extensions not to cache, e.g. [".m3u8", ".mpd"]. Leave the list empty in order to block no extensions. Use a space-separated list of extensions for the environment variable, e.g. .m3u8 .mpd.
By default the manifest files for HLS and DASH are blocked from caching, i.e. [".m3u8", ".mpd"].
The settings for the S3 filesystem. This filesystem is accessible on the configured path via HTTP. This filesystem can only be accessed via HTTP. Writing to and deleting from the filesystem can be restricted by HTTP basic auth.
Any S3 compatible service can be used, e.g. Amazon, Minio, Backblaze, ...
Available as of version 16.12.0
The name for this storage. The name will be used in placeholders, e.g. {fs:aws}, and for accessing the filesystem via the API, e.g. /api/v3/fs/aws. The name memfs is reserved for the in-memory filesystem.
By default this value is not set, but it is required.
The path where the filesystem will be mounted in the HTTP server. It needs to be an absolute path. A mountpoint is required.
By default this value is not set, but is required.
Set this value to true in order to enable basic auth for PUT, POST, and DELETE operations on the configured mountpoint. Read access (GET, HEAD) is not restricted. If enabled, you have to define a username and a password.
It is highly recommended to enable basic auth for write operations on the mountpoint.
By default this value is set to false.
Username for Basic-Auth of the configured mountpoint. This has to be set if basic auth is enabled.
By default this value is not set, i.e. an empty string.
Password for Basic-Auth of the configured mountpoint. This has to be set if basic auth is enabled.
By default this value is not set, i.e. an empty string.
The endpoint for the S3 storage. For Amazon AWS S3 this would be e.g. s3.amazonaws.com. Ask your S3 storage provider for the necessary credentials.
By default this value is not set, i.e. an empty string.
Your access key ID for the S3 storage. Ask your S3 storage provider for the necessary credentials.
By default this value is not set, i.e. an empty string.
Your secret access key for the S3 storage. Ask your S3 storage provider for the necessary credentials.
By default this value is not set, i.e. an empty string.
The name of the bucket you want to use. If the bucket does not exist already it will be created.
By default this value is not set, i.e. an empty string.
Identifier of the region the storage is located in, e.g. eu, us-west1, etc. If your S3 storage provider doesn't support regions, leave this field empty.
By default this value is not set, i.e. an empty string.
Whether to use HTTPS or HTTP. It is strongly recommended to enable this setting.
By default this is set to false.
Settings for session capturing. Sessions for HLS, RTMP, SRT, HTTP, and FFmpeg are captured.
Set this value to true in order to enable session capturing.
By default this value is set to true.
List of IP ranges in CIDR notation to ignore for session capturing. If either end of a connection falls into this list of IPs, the session will not be captured. For the environment variable, provide a comma-separated list of IP ranges in CIDR notation.
By default this value is set to ["127.0.0.1/32","::1/128"].
The timeout in seconds for an idle session. After this timeout the session is considered closed. Applies only to HTTP and HLS sessions.
By default this value is set to 30 seconds.
Whether to persist the session history. The session history is stored as sessions.json in db.dir. If the session history is not persisted, it will be kept only in memory.
By default the session history is not persisted, i.e. this value is set to false.
Interval in seconds in which to persist the current session history. This setting only has an effect if persisting the session history is enabled.
By default this value is set to 300 seconds.
The maximum allowed outgoing bitrate in Mbit/s. If the limit is reached, any new HLS sessions will be rejected. A value of 0 means no limitation of the outgoing bitrate.
By default this value is set to 0, i.e. unlimited.
The maximum allowed number of simultaneous sessions. If the limit is reached, any new HLS sessions will be rejected. A value of 0 means no limitation of the number of sessions.
By default this value is set to 0, i.e. unlimited.
A simple RTMP server for publishing and playing streams
The settings for the built-in RTMP server. Check out our RTMP guide for more information.
Set this value to true in order to enable the built-in RTMP server.
By default the RTMP server is disabled.
Set this value to true to enable the RTMPS server, which will run in parallel with the RTMP server on a different port. You have to set tls.enable to true in order to enable the RTMPS server, because it uses the same certificate as the HTTPS server.
By default TLS is disabled.
If the RTMP server is enabled, it will listen on this address. The default address is :1935.
The default :1935 listens on all interfaces on port 1935. To listen on a specific interface only, prepend its IP, e.g. 127.0.0.1:1935 to listen only on the loopback interface.
If the RTMPS server is enabled, it will listen on this address. The default address is :1936.
The default :1936 listens on all interfaces on port 1936. To listen on a specific interface only, prepend its IP, e.g. 127.0.0.1:1936 to listen only on the loopback interface.
Define the app a stream can be published on, e.g. /live to require the path in an RTMP URL to start with /live.
The default app is /.
To prevent anybody from publishing or playing streams, define a token that is a secret only known to the publishers and subscribers. The token has to be put in the query of the stream URL, e.g. /live/stream?token=abc123.
As of version 16.12.0 the token can be appended to the path instead of being passed as a query parameter, e.g. /live/stream/abc123. With this, the token corresponds to a stream key.
By default the token is not set (i.e. an empty string).
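A sketch of the RTMP section of the config with the settings described above; the exact key names are assumptions:

```json
{
  "rtmp": {
    "enable": true,
    "enable_tls": false,
    "address": ":1935",
    "address_tls": ":1936",
    "app": "/live",
    "token": "abc123"
  }
}
```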
Settings for collecting metrics of the core and FFmpeg processes.
Caution with many processes and low values! It will increase CPU and RAM usage.
Enable collecting metrics data of the datarhei Core itself and the FFmpeg processes. The metrics can be queried via the metrics API endpoint.
By default collecting the metrics is disabled.
Enable the Prometheus endpoint at /metrics. This requires that collecting metrics is enabled.
By default this is disabled.
Define for how many seconds historic metrics data should be kept.
By default this value is set to 300.
Define in which interval (in seconds) the metrics should be collected.
By default this value is set to 2.
These are the settings for securing the API from unwanted access.
Set this value to true in order to allow only read access to the API. All API endpoints for writing will not be mounted.
By default this value is set to false.
A list of IPs that are allowed to access the API via HTTP. Each entry has to be an IP range in CIDR notation, e.g. ["127.0.0.1/32","::1/128"]. Provide the list as comma-separated values for the environment variable, e.g. 127.0.0.1/32,::1/128. If the list is empty, then all IPs are allowed. If the list contains any invalid IP range, the server will refuse to start.
By default the list is empty.
A list of IPs that are not allowed to access the API via HTTP. Each entry has to be an IP range in CIDR notation. Provide the list as comma-separated values for the environment variable. If the list is empty, then no IPs will be blocked. If the list contains any invalid IP range, the server will refuse to start.
By default the list is empty.
A list of IPs that are allowed to access the API via HTTPS. Each entry has to be an IP range in CIDR notation. Provide the list as comma-separated values for the environment variable. If the list is empty, then all IPs are allowed. If the list contains any invalid IP range, the server will refuse to start.
By default the list is empty.
A list of IPs that are not allowed to access the API via HTTPS. Each entry has to be an IP range in CIDR notation. Provide the list as comma-separated values for the environment variable. If the list is empty, then no IPs will be blocked. If the list contains any invalid IP range, the server will refuse to start.
By default the list is empty.
Set this value to true to enable JWT authentication for the API. If it is enabled, you have to provide a username and password. The username and password are sent to the /api/login endpoint in order to obtain an access and a refresh JWT.
It is strongly recommended to enable authentication for the API in order to prevent access by unwanted parties.
By default this value is set to false.
Set this value to true in order to allow unprotected access from localhost.
By default this value is set to false.
The username for JWT authentication. If JWT authentication is enabled, a username must be defined.
By default this value is empty, i.e. no username defined.
The password for JWT authentication. If JWT authentication is enabled, a password must be defined.
By default this value is empty, i.e. no password defined.
A secret for signing the JWT. If you leave this value empty, a random secret will be generated for you.
By default this value is empty.
Set this value to true in order to enable Auth0 protection for the API. With this, a valid Auth0 access JWT can be used instead of a username/password in order to obtain the access and refresh JWT. Additionally, you have to provide a list of tenants and their users to validate the Auth0 access JWT against.
By default this value is set to false.
A list of allowed tenants and their users. A tenant is a JSON object:
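A sketch of such a tenant object, with illustrative values:

```json
{
  "domain": "example.eu.auth0.com",
  "audience": "https://api.example.com/",
  "clientid": "0123456789abcdef",
  "users": ["auth0|1234567890"]
}
```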
You can obtain the domain, audience, and clientid from your Auth0 account. You also have to provide a list of allowed users that are member of that tenant.
For providing the list of tenants and their users as an environment variable, provide a comma-separated list of base64-encoded tenant JSON objects.
As of version 16.12.0 there's a different syntax available for providing the tenants as environment variable. A list of comma separated URLs of this form:
By default this list is empty.
A simple SRT server for publishing and playing streams
The settings for the built-in SRT server. Check out our SRT guide for more information.
Set this value to true in order to enable the built-in SRT server.
By default the SRT server is disabled.
If the SRT server is enabled, it will listen on this address. The default address is :6000.
The default :6000 listens on all interfaces on port 6000. To listen on a specific interface only, prepend its IP, e.g. 127.0.0.1:6000 to listen only on the loopback interface.
Define a passphrase in order to enable SRT encryption. If the passphrase is set it is required and applies to all connections.
By default the passphrase is not set (i.e. an empty string).
The token is an arbitrary string that needs to be communicated in the streamid. Only with a valid token is it possible to publish or request streams. If the token is not set, anybody can publish and request streams.
By default the token is not set (i.e. an empty string).
Set this value to true in order to enable logging for the SRT server. This will log events on the SRT protocol level. You have to provide the topics you are interested in, otherwise nothing will be logged.
By default the logging is disabled.
Logging topics allow you to define what type of messages will be logged. This is practical if you want to debug a SRT connection. An empty list of topics means that no topics will be logged.
By default no topics are logged (i.e. an empty array).
Settings for the FFmpeg binary.
Path to the ffmpeg binary. The system's PATH will be searched for the ffmpeg binary. You can also provide an absolute or relative path to the binary.
By default this value is set to ffmpeg.
The maximum number of simultaneously running ffmpeg instances. Set this value to 0 in order to not impose any limit.
By default this value is set to 0.
To control where FFmpeg can read from and where it can write to, you can define patterns that match the input addresses or the output addresses. These patterns are regular expressions. For the respective environment variables, the expressions need to be space-separated, e.g. https?:// rtsp:// rtmp://.
An address will be rejected if it is outside the storage.disk.dir directory. Otherwise, the protocol file: will be prepended. If you want to explicitly allow or block access to the filesystem, use file: as a pattern in the respective list.
Special cases are the output address - (which will be rewritten to pipe:) and /dev/null, which will be allowed even though it's outside of storage.disk.dir.
List of patterns for allowed inputs.
By default this list is empty, i.e. all inputs are allowed.
List of patterns for disallowed inputs.
By default this list is empty, i.e. no inputs are blocked.
List of patterns for allowed outputs.
By default this list is empty, i.e. all outputs are allowed.
List of patterns for disallowed outputs.
By default this list is empty, i.e. no outputs are blocked.
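A sketch of how these lists could look in the config; the exact nesting under ffmpeg.access is an assumption:

```json
{
  "ffmpeg": {
    "access": {
      "input": {
        "allow": ["https?://", "rtsp://", "rtmp://"],
        "block": []
      },
      "output": {
        "allow": ["file:", "rtmp://"],
        "block": []
      }
    }
  }
}
```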
The number of latest FFmpeg log lines to keep for each process.
By default this value is set to 50 lines.
The number of historic logs to keep for each process.
By default this value is set to 3.
Find a list of known logging topics in the logging settings.
Independently of the values of access.output, there's a check that verifies that output can only be written to the directory specified in storage.disk.dir. It works as follows: if the address has a protocol specifier other than file:, then no further checks will be applied. If the protocol is file:, or no protocol specifier is given, the address is assumed to be a path and is checked to be inside of storage.disk.dir.
Settings for static HTTP routes.
List of path prefixes that are not allowed to be overwritten by a static route. If a static route would overwrite one of the blocked prefixes, an error will be thrown at startup. For the environment variable, provide a comma-separated list of prefixes, e.g. /prefix1,/prefix2.
By default this value is set to ["/api"].
A list of static routes. This maps a path to a different path and results in an HTTP redirect, e.g. {"/foo.txt": "/bar.txt"} will redirect requests from /foo.txt to /bar.txt. Paths have to start with a / and they are based on storage.disk.dir on the filesystem.
The special suffix /* of a route allows you to serve whole directories from another root than storage.disk.dir, e.g. {"/ui/*": "/path/to/ui"}. If you use a relative path as the target, it will be resolved relative to the current working directory.
By default no routes are defined.
A path to a directory holding UI files. This will be mounted as /ui.
By default this value is not set, i.e. an empty string.
The debugging settings can help to find and solve issues with the datarhei Core.
By setting this to true, the endpoint /profiling will be established where you can access different diagnostic solutions as described in https://go.dev/doc/diagnostics.
By default this setting is set to false.
Golang is usually quite greedy when it comes to claiming memory for itself. This setting lets you define the number of seconds between forced runs of the garbage collector in order to return memory to the OS. If this is not set, the runtime will decide on its own when to run the garbage collector.
Alternatively, you can set the environment variable GOMEMLIMIT to a value in bytes in order to set a soft memory limit. This will influence the garbage collector when the consumed memory comes close to this limit. If you use the GOMEMLIMIT environment variable, you are advised to leave the force_gc option disabled.
The default for this setting is 0 (i.e. disabled).
As of version 16.12.0, use this value to impose a soft limit on the memory consumption of the Core application itself (i.e. without the memory consumption of the FFmpeg processes). This has the same effect as setting the GOMEMLIMIT environment variable.
The provided value is the number of megabytes the Core application is allowed to consume.
The default for this setting is 0 (i.e. no limit).
Swagger documentation.
The documentation of the API is available on /api/swagger/index.html
To generate the API documentation from the code, use swag:
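A sketch of the two commands; the exact swag flags and build target are assumptions:

```sh
# generate docs/swagger.json and docs/swagger.yaml from the code annotations
swag init
# build the core binary and start it
go build -o core . && ./core
```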
After the first command, the swagger definition can be found at docs/swagger.json or docs/swagger.yaml.
The second command will build the core binary and start it. With the default configuration, the Swagger UI is available at http://localhost:8080/api/swagger/index.html.
Known interfaces based on the Core.
The datarhei Core includes a simple RTMP server for publishing and playing streams. It is not enabled by default. You have to enable it in the RTMP section of the config or via the corresponding environment variables.
Example:
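A publishing/playback URL in that form could look like this (host and port assume the default configuration):

```
rtmp://127.0.0.1:1935/live/12345.stream
```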
In the above example, /live is the RTMP app and 12345.stream is the name of the resource.
In order to protect access to the RTMP server, you should define a token in the configuration. Only with a valid token is it possible to publish or play resources. The token has to be appended to the RTMP URL as a query string, e.g. rtmp://127.0.0.1:1935/live/12345.stream?token=abc.
As of version 16.12.0 you can write the token as the last part of the path in the URL instead of as a query string. The above example will then look like rtmp://127.0.0.1:1935/live/12345.stream/abc. This allows you to enter the token into the "stream key" field of some clients.
Via the RTMP endpoint of the API you can gather a list of the currently publishing RTMP resources.
The datarhei Core includes a simple SRT server for publishing and playing streams. It is not enabled by default. You have to enable it in the SRT section of the config or via the corresponding environment variables.
SRT is a modern live streaming protocol with low latency and tolerance for network failures.
The SRT server supports publishing and requesting streams, similar to an RTMP server. Your SRT client always has to connect to the SRT server in caller mode and live transmission mode.
Example:
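A sketch of an SRT URL against a local Core with the default port; the streamid is left out here, its format is described below:

```
srt://127.0.0.1:6000?mode=caller&transtype=live&streamid=...
```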
If a passphrase is set in the config (or via environment variable), you have to provide the passphrase in the SRT URL. Example SRT URL with the passphrase foobarfoobar:
In order to define whether you want to publish or request a resource, you have to provide your intent in the streamid.
The streamid is formatted as follows:
The resource is the name of the stream. This can be anything, but you can publish only one stream with the same name.
The mode is either request or publish. If you don't provide a mode, request will be assumed. You can only request resources that are currently being published, and you can only publish resources that are not already being published.
The token is the one defined in the config (see the SRT settings). If no token is configured, you can omit the token in the streamid.
Publishing the resource 12345 with the token foobar:
Publishing the resource 12345 with no token defined in the configuration:
Requesting the resource 12345 with no token defined in the configuration:
Requesting the resource 12345 with the token foobar:
The whole SRT URL might look like this for the last example:
The datarhei Core provides two filesystem abstractions that you can use in your FFmpeg process configurations.
The disk filesystem is the directory you defined in the configuration at storage.disk.dir. Any FFmpeg command will be restricted to this directory (or its subdirectories) for file access (read or write). One exception is /dev, in order to access, e.g., USB cameras.
In a process configuration you can use the corresponding placeholder, such that you don't need to remember and write the configured path.
The contents of the disk filesystem are accessible read-only via the / path of the datarhei Core HTTP server.
In order to access and modify the contents of the filesystem programmatically, you can use the corresponding API endpoints.
The datarhei Core has a built-in memory filesystem. It is enabled by default and is only accessible via HTTP. Its contents can be accessed via the /memfs path of the datarhei Core HTTP server.
In the memory storage section of the configuration you can define different aspects of the memory filesystem, such as the maximum size of the filesystem, the maximum file size, password protection, and so on.
In a process configuration you can use the corresponding placeholder, such that you don't need to remember and write the whole base URL.
The /memfs path is not read-only. Write access is protected via HTTP Basic-Auth.
In order to access and modify the contents of the filesystem programmatically, you can use the corresponding API endpoints.
The datarhei Core allows you to mount an S3-compatible filesystem. It is only accessible via HTTP. Its contents can be accessed via the configured path of the datarhei Core HTTP server.
In the S3 storage section of the configuration you can define different aspects of the S3 filesystem, such as the login credentials, bucket name, and so on.
In a process configuration you can use the corresponding placeholder, such that you don't need to remember and write the whole base URL, where [name] is the configured name of the S3 filesystem, e.g. {fs:aws}.
The S3 filesystem HTTP mountpoint is not read-only. Write access is protected via HTTP Basic-Auth.
In order to access and modify the contents of the filesystem programmatically, you can use the corresponding API endpoints.
Via the SRT API endpoint you can gather statistics about the currently connected SRT clients.
Metrics for the processes and other aspects are provided for a Prometheus scraper on /metrics.
You have to enable metrics collection and the Prometheus endpoint in the metrics settings.
Currently, available metrics are:

| Metric | Type | Dimensions | Description |
|---|---|---|---|
| ffmpeg_process | gauge | | General stats per process. |
| ffmpeg_process_io | gauge | | Stats per input and output of a process. |
| mem_limit_bytes | gauge | | Total available memory in bytes. |
| mem_free_bytes | gauge | | Free memory in bytes. |
| net_rx_bytes | gauge | | Number of received bytes by interface. |
| net_tx_bytes | gauge | | Number of sent bytes by interface. |
| cpus_system_time_secs | gauge | | System time per CPU in seconds. |
| cpus_user_time_secs | gauge | | User time per CPU in seconds. |
| cpus_idle_time_secs | gauge | | Idle time per CPU in seconds. |
| session_total | counter | | Total number of sessions by collector. |
| session_active | gauge | | Current number of active sessions by collector. |
| session_rx_bytes | counter | | Total received bytes by collector. |
| session_tx_bytes | counter | | Total sent bytes by collector. |
Quick introduction to using the core API and FFmpeg processes.
Starting the Core container
Configure and restart the Core
Creating, verifying, updating, and deleting an FFmpeg process
Using placeholders for the in-memory file system and RTMP server
Analyze FFmpeg process errors
Configuring the Core via the API
Restarting the Core via the API and loading the changed configuration
Calling the RTMP API
Initiating the main-process using the Process API
Check the main-process via the Process API
Update the main-process via the Process API
Check the change of the main-process via the Process API
Monitor the RTMP stream via the RTMP API
The configuration creates a process with a virtual video input and no output.
The process is running if the exec state in the response is running and the progress.packet is increasing.
This configuration creates an RTMP stream as output and sends it to the internal RTMP server. It uses the {rtmp} placeholder for ease of integration.
Now you can pull the stream from the RTMP server with the URL rtmp://127.0.0.1:1935/main
.
Initiate the sub-process through the Process API
Check the sub-process through the Process API
Monitor the HLS stream via the file system API
This configuration uses the RTMP stream from the main-process and creates an HLS stream as an output on the internal in-memory file system. It uses the {memfs} placeholder for ease of integration.
Now you can pull the stream from the in-memory filesystem with the URL http://127.0.0.1:8080/memfs/sub.m3u8
.
Stop the main-process through the Process API
Check the sub-process via process API
Analyze the sub-process log report via the Process API
Start the main-process through the Process API
Check the sub-process through the Process API
This command stops the running main-process and interrupts the RTMP stream.
If a process has the order=start, does not return exec=running, and progress.packet is not ascending or >0, it indicates an error.
The current and the last three process logs are available in the report.
The log entries in the response
rtmp://localhost:1935/main: Broken pipe
rtmp://localhost:1935/main I/O error
indicate that the video source is unreachable, so the process fails.
This command enables the main-process again.
The exec status in the response is running again.
Stops and deletes the sub-process via the Process API
Stops and deletes the main-process via the Process API
The API allows you to manipulate the contents of the available filesystems.
Get the last log lines of the Core application.
The last log events are kept in memory and are accessible via the /api/v3/log endpoint. You can either retrieve the log lines in the format the Core writes them to the console (?format=console) or in raw format (?format=raw). By default they are returned in "console" format.
You can define the number of last log lines kept in memory by either setting an appropriate value in the config (log.max_lines) or via an environment variable (CORE_LOG_MAX_LINES).
Example:
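A sketch with curl, assuming JWT auth is enabled and an access token is at hand:

```sh
curl -H "Authorization: Bearer $ACCESS_TOKEN" "http://127.0.0.1:8080/api/v3/log?format=console"
```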
The GoClient always returns the logs in "raw" format.
Description:
The disk filesystem gives access to the actual directory that has been provided in the configuration as storage.disk.dir. It is accessible read-only via HTTP under /.
Given that the requested file exists, the returned Content-Type is based solely on the file extension. For a list of known MIME types and their extensions, see storage.mime_types in the configuration.
Example:
Path is the complete file path incl. file name (/a/b/c/1.txt).
The contents for the upload have to be provided as an io.Reader.
After the successful upload the file is available at /example.txt and /api/v3/fs/disk/example.txt.
Description:
Listing all currently stored files is done by calling /api/v3/fs/disk. It also accepts the query parameters pattern, sort (name, size, or lastmod), and order (asc or desc). If none of the parameters are given, all files will be listed, sorted by their last modification time in ascending order.
Description:
For downloading a file you have to specify the complete path and filename. The Content-Type will always be application/data.
Example:
The returned data is an io.ReadCloser.
Description:
For deleting a file you have to specify the complete path and filename.
Example:
Description:
With auth enabled, you have to retrieve a JWT token before you can access the API calls.
Send the username and password, as defined in the API security settings, in the body of the request to the /api/login endpoint in order to obtain a valid access and refresh JWT.
Example:
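A sketch with curl; the field names in the request body are assumptions:

```sh
curl -X POST -H "Content-Type: application/json" \
  -d '{"username": "admin", "password": "secret"}' \
  http://127.0.0.1:8080/api/login
```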
On successful login, the response looks like this:
Use the access_token in all subsequent calls to the /api/v3/ endpoints, e.g.
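A sketch of such a call:

```sh
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://127.0.0.1:8080/api/v3/process
```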
The expiry date is stored in the exp field of the access token payload; the number of seconds until expiry is stored in the exi field.
In order to obtain a new access token, use the refresh_token for a call to /api/login/refresh:
After the refresh token expires, you have to login again with your username and password.
By creating a new core client, the login happens automatically. If the login fails, coreclient.New() will return an error.
Description:
Example:
In order to obtain a new access token, use the refresh_token for a call to /api/login/refresh. Example:
On successful login, the response looks like this:
The client handles the refresh of the tokens automatically. However, the access_token can also be updated manually:
The client handles the refresh of the tokens automatically. However, you can extract the currently used tokens from the client:
You can use these tokens to continue this session later on, given that at least the refresh token didn't expire yet. This saves the client a login round-trip:
The username and password should be provided as well, in case the refresh token expires.
Once the refresh token expires, you have to login again with your username and password, or a valid Auth0 token.
Description:
A very simple in-memory filesystem is available which is only accessible via HTTP under /memfs. Use the POST or PUT method with the path to a file; the body of the request contains the contents of the file. No particular encoding or Content-Type is required. The file can then be downloaded from the same path.
This filesystem is practical for frequently changing data (e.g. an HLS live stream) in order not to stress the disk or wear out a flash drive. You also don't need to set up a RAM drive or similar on your system.
The returned Content-Type is based solely on the file extension. For a list of known MIME types and their extensions, see storage.mime_types in the configuration.
It is strongly recommended to enable a username/password (HTTP Basic-Auth) protection for any PUT/POST and DELETE operations on /memfs. GET operations are not protected.
By default HTTP Basic-Auth is enabled with the username "admin" and a random password.
Use these endpoints to, e.g., store HLS chunks and .m3u8 files (in contrast to an actual disk or a ramdisk):
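A sketch of an FFmpeg command writing an HLS stream to /memfs via HTTP PUT; the source, encoder settings, and credentials are illustrative:

```sh
ffmpeg -re -f lavfi -i testsrc2=size=1280x720:rate=25 \
  -c:v libx264 -preset veryfast -g 50 \
  -f hls -hls_time 2 -hls_list_size 6 -method PUT \
  "http://admin:password@127.0.0.1:8080/memfs/foobar.m3u8"
```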
Then you can play it with, e.g., ffplay http://127.0.0.1:8080/memfs/foobar.m3u8.
Example:
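A sketch with curl, assuming basic auth is enabled with the username admin:

```sh
curl -X PUT -u admin:password --data-binary @example.txt http://127.0.0.1:8080/memfs/example.txt
```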
The contents for the upload have to be provided as an io.Reader.
After the successful upload the file is available at /memfs/example.txt and /api/v3/fs/mem/example.txt.
Description:
Listing all currently stored files is done by calling /api/v3/fs/mem. It also accepts the query parameters pattern, sort (name, size, or lastmod), and order (asc or desc). If none of the parameters are given, all files will be listed, sorted by their last modification time in ascending order.
Example:
Description:
For downloading a file you have to specify the complete path and filename. The Content-Type will always be application/data.
Example:
The returned data is an io.ReadCloser.
Description:
Linking a file will return a redirect to the linked file. The target of the redirect has to be in the body of the request.
Example:
This is not implemented in the client.
Description:
For deleting a file you have to specify the complete path and filename.
Example:
Description:
The /api/v3/config endpoints allow you to inspect, modify, and reload the configuration of the datarhei Core.
Retrieve the currently active Core configuration, together with a list of all fields that are overridden by an environment variable. Such fields can be changed, but that will have no effect because the environment variable always overrides the value.
Example:
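A sketch with curl:

```sh
curl -H "Authorization: Bearer $ACCESS_TOKEN" http://127.0.0.1:8080/api/v3/config
```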
The actual config is in config.Config, which is an interface{} type. Depending on the returned version, you have to cast it to the corresponding type in order to access the fields of the config:
Description:
Upload a modified configuration. You can provide a partial configuration with only the fields you want to change. The minimal valid configuration you have to provide contains the version number:
Example:
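A sketch with curl, assuming the endpoint accepts the partial configuration via PUT:

```sh
curl -X PUT -H "Authorization: Bearer $ACCESS_TOKEN" -H "Content-Type: application/json" \
  -d '{"version": 3, "log": {"level": "debug"}}' \
  http://127.0.0.1:8080/api/v3/config
```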
This has no effect until the configuration is reloaded.
Description:
After changing the configuration, the datarhei Core has to be restarted in order to reload the changed configuration.
Example:
Configuration reload will restart the Core! The in-memory file system and sessions will remain intact.
Description:
Complete config example:
Required fields: version
The contents of the disk filesystem at / are also accessible via the API in the same way as described above, but with the same protection as the API (see the API security settings) for all operations. It is also possible to list all files that are currently in the filesystem.
With the pattern parameter you can filter the list based on a glob pattern, with the addition of the ** placeholder to include multiple subdirectories, e.g. listing all .ts files in the root directory uses the pattern /*.ts, and listing all .ts files in the whole filesystem uses the pattern /**.ts.
Example:
Send a valid Auth0 access JWT in the Authorization header to the /api/login endpoint in order to obtain an access and refresh JWT. The Auth0 tenant and the allowed users must be defined in the API security settings.
The contents of the /memfs are also accessible via the API in the same way as described above, but with the same protection as the API (see the API security settings) for all operations. It is also possible to list all files that are currently in the filesystem.
With the pattern parameter you can filter the list based on a glob pattern, with the addition of the ** placeholder to include multiple subdirectories, e.g. listing all .ts files in the root directory uses the pattern /*.ts, and listing all .ts files in the whole filesystem uses the pattern /**.ts.
The profiling endpoint allows you to fetch profiling information from a running datarhei Core instance. It has to be enabled with debug.profiling in the config.
Point your browser to /profiling, where you can access different diagnostic solutions as described in https://go.dev/doc/diagnostics.
The /ping endpoint returns a plain text pong response. This can be used for liveness and/or latency checks.
This is currently not implemented.
Sessions track user actions regarding pushing and pulling video data to and from the Core. Different kinds of sessions are captured:
HLS Sessions (hls collector): how many users are watching an HLS stream.
HLS Ingress Sessions (hlsingress collector): how many users are publishing a stream via the in-memory filesystem.
HTTP Sessions (http collector): how many users are reading or writing HTTP data via the built-in HTTP server.
RTMP Sessions (rtmp collector): how many users are publishing or watching a stream via the built-in RTMP server.
SRT Sessions (srt collector): how many users are publishing or watching a stream via the built-in SRT server.
FFmpeg Sessions (ffmpeg collector): how many streams FFmpeg is using as an input or an output.
The data for each session includes the current ingress and egress bandwidth, the total amount of data, an identifier for the session itself, the local end of the stream, and the remote end of the stream.
The following API endpoints allow you to extract information about the currently active sessions or, additionally, a summary of the already finished sessions. Each endpoint requires a comma-separated list of collectors as the query parameter collectors.
Example:
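A sketch with curl; the exact endpoint path for the active sessions is an assumption:

```sh
curl -H "Authorization: Bearer $ACCESS_TOKEN" "http://127.0.0.1:8080/api/v3/session/active?collectors=hls,rtmp,srt"
```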
Description:
Example:
Description:
Query the collected metrics.
The core can collect metrics about itself, the system it is running on, and the FFmpeg processes it is executing. This is not enabled by default. Please check the metrics configuration for how to enable it, how often metrics should be collected, and for how long metrics should be kept available for querying.
Each metric is collected by a collector, like a topic. Each collector can contain several metrics and each metric can have labels to describe a variant of that metric. Think of used space on a filesystem where the variant is whether it is a disk filesystem or a memory filesystem.
All metrics can be scraped by Prometheus from the /metrics endpoint, if enabled.
In order to know which metrics are available and to learn what they mean, you can retrieve a list of all metrics, their descriptions, and labels.
Example:
Description:
All collected metrics can be queried by sending a query to the /api/v3/metrics endpoint. This query contains the names of the metrics with the labels you are interested in. Leave out the labels in order to get the values for all labels of that metric. By default you will receive the last collected value. You can also receive a whole timeseries for each metric and label by providing a timerange and step size in seconds.
Example:
Description:
S3 filesystems are only accessible via HTTP on their configured mountpoint. Use the POST or PUT method with the path of a file to (over-)write it. The body of the request contains the contents of the file. No particular encoding or Content-Type is required. The file can then be downloaded from the same path.
This filesystem is practical for rarely changing data (e.g. VOD) and long-term storage.
On this page and in the examples we assume that an S3 storage with the name aws is mounted on /awsfs.
The returned Content-Type is based solely on the file extension. For a list of known MIME types and their extensions, see storage.mime_types in the configuration.
It is strongly recommended to enable a username/password (HTTP Basic-Auth) protection for any PUT/POST and DELETE operations on the mountpoint. GET operations are not protected.
By default HTTP Basic-Auth is not enabled.
The contents of the S3 filesystem mounted on /awsfs are also accessible via the API in the same way as described above, but with the same protection as the API (see the API-Security configuration) for all operations. It is also possible to list all files that are currently in the filesystem.
Example:
The contents for the upload have to be provided as an io.Reader.
After the successful upload the file is available at /awsfs/example.txt and /api/v3/fs/aws/example.txt.
Description:
Listing all currently stored files is done by calling /api/v3/fs/aws. It also accepts the query parameters pattern, sort (name, size, or lastmod), and order (asc or desc). If none of the parameters are given, all files will be listed, sorted by their last modification time in ascending order.
With the pattern parameter you can filter the list based on a glob pattern, with the addition of the ** placeholder to include multiple subdirectories, e.g. listing all .ts files in the root directory uses the pattern /*.ts, and listing all .ts files in the whole filesystem uses the pattern /**.ts.
Example:
Description:
For downloading a file you have to specify the complete path and filename. The Content-Type will always be application/data.
Example:
The returned data is an io.ReadCloser.
Description:
For deleting a file you have to specify the complete path and filename.
Example:
Description:
Manage FFmpeg processes
The /api/v3/process family of API calls lets you manage and monitor FFmpeg processes run by the datarhei Core.
An FFmpeg process definition, as required by the API, consists of its inputs, its outputs, and global options. That's a very minimalistic abstraction of the FFmpeg command line and assumes that you know the command line options needed to achieve what you want.
The most minimal process definition is:
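A sketch of such a definition; the addresses are illustrative and the exact set of required fields may differ:

```json
{
  "input": [
    {
      "address": "rtmp://example.com/live/stream",
      "options": []
    }
  ],
  "output": [
    {
      "address": "/dev/null",
      "options": ["-codec", "copy", "-f", "null"]
    }
  ]
}
```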
This will be translated to the FFmpeg command line:
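For the sketch above, roughly (the Core may add further default options):

```sh
ffmpeg -i rtmp://example.com/live/stream -codec copy -f null /dev/null
```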
Let's use this as a starting point for a more practical example. We want to generate a test video with silence audio, encode it to H.264 and AAC and finally send it via RTMP to some destination.
You can give each process an ID with the field id. There are no restrictions regarding the format or allowed characters. If you don't provide an ID, one will be generated for you. The ID is used to identify the process in later API calls for querying and modifying the process after it has been created. An ID has to be unique for each datarhei Core.
In addition to the ID you can provide a reference with the field reference. This allows you to store arbitrary information with the process; it will not be interpreted at all. You can use it, e.g., for grouping different processes together.
First, we define the inputs:
This will be translated to the FFmpeg command line:
The id for each input is optional and will be used for later reference or for replacing placeholders (more about placeholders later). If you don't provide an ID for an input, it will be generated for you.
Next, we define the output. Because both the inputs are raw video and raw audio data, we need to encode them.
This will be translated to the FFmpeg command line:
Putting it all together:
All this together results in the command line:
FFmpeg is quite talkative by default and we want to be notified only about errors. We can add global options that will be placed before any inputs:
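A sketch of such global options, assuming the process config carries them in a top-level options array:

```json
"options": ["-loglevel", "error"]
```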
The inputs and outputs are left out for the sake of brevity. Now we have our FFmpeg command line complete:
The process config also allows you to control what happens when you create the process and what happens after the process finishes:
The reconnect option tells the datarhei Core to restart the process in case it finished (either normally or because of an error). It will wait for reconnect_delay_seconds until the process is restarted. Set this value to 0 in order to restart the process immediately.
The stale_timeout_seconds option causes the process to be (forcefully) stopped in case it stales, i.e. no packets are processed for this number of seconds. Disable this feature by setting the value to 0.
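A sketch of these fields in a process config:

```json
{
  "reconnect": true,
  "reconnect_delay_seconds": 15,
  "stale_timeout_seconds": 30
}
```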
The datarhei Core constantly monitors the vitals of each process. This also includes the memory and CPU consumption. If you have limited resources or want to set an upper limit for the resources a process is allowed to consume, you can set the limit options in the process config:
The cpu_usage option sets the limit of CPU usage in percent, e.g. not more than 10% of the CPU should be used for this process. A value of 0 (the default) disables this option.
The memory_mbytes option sets the limit of memory usage in megabytes, e.g. not more than 50 megabytes of memory should be used. A value of 0 (the default) disables this option.
If the resource consumption for at least one of the limits is exceeded for waitfor_seconds, the process will be forcefully terminated.
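A sketch of these fields, assuming they are grouped under a limits object in the process config:

```json
"limits": {
  "cpu_usage": 10,
  "memory_mbytes": 50,
  "waitfor_seconds": 5
}
```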
For long running processes that produce a lot of files (e.g. a HLS live stream), it can happen that not all created files are removed by the ffmpeg process itself. Or if the process exits and doesn't or can't cleanup any files it created. This leaves files on filesystem that shouldn't be there and just using up space.
With the optional array of cleanup rules for each output, it is possible to define rules for removing files from the memory filesystem or disk. Each rule consists of a glob pattern and either a maximum allowed number of files matching that pattern or a maximum permitted age for the files matching that pattern. The pattern starts with either memfs:
or diskfs:
depending on which filesystem this rule is designated to. Then a glob pattern follows to identify the files. If max_files
is set to a number larger than 0, then the oldest files from the matching files will be deleted if the list of matching files is longer than that number. If max_file_age_seconds
is set to a number larger than 0, then all files that are older than this number of seconds from the matching files will be deleted. If purge_on_delete
is set to true
, then all matching files will be deleted when the process is deleted.
As of version 16.12.0 the prefixes for selecting the filesystem (e.g. diskfs:
or memfs:
) correspond to the configured name of the filesystem, in case you mounted one or more S3 filesystems. Reserved names are disk
, diskfs
, mem
, and memfs
. The names disk
and mem
are synonyms for diskfs
and memfs
, respectively. E.g. if you have an S3 filesystem with the name aws
mounted, use the aws:
prefix.
Optional cleanup configuration:
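A sketch of a cleanup rule attached to an output, using the fields described above (the pattern and the values are examples only):

```json
"output": [
  {
    "id": "hls",
    "address": "{memfs}/{processid}.m3u8",
    "cleanup": [
      {
        "pattern": "memfs:/{processid}*",
        "max_files": 12,
        "max_file_age_seconds": 60,
        "purge_on_delete": true
      }
    ]
  }
]
```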
As part of the pattern you can use placeholders.
Examples:
The file on the disk with the ID of the process as the name and the extension .m3u8
.
All files on disk whose names starts with the ID of the process and that have the extension .m3u8
or .ts
.
All files on disk that have the extension .ts and are in the folder structure denoted by the process' reference, ID, and the ID of the output this cleanup rule belongs to.
All files whose name starts with the reference of the process, e.g. /abc_1.ts
, but not /abc/1.ts
.
All files whose name or path starts with the reference of the process, e.g. /abc_1.ts, /abc/1.ts
, or /abc_data/foobar/42.txt
.
References allow you to refer to an output of another process and to use it as input. The address of the input has to be in the form #[processid]:output=[id]
, where [processid]
denotes the ID of the process you want to use the output from, and [id]
denotes the ID of the output of that process.
Example:
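A sketch with two process configs: the first (ID ingest) publishes to rtmp://someip/live/stream via an output with the ID out, and the second references that output as its input (the IDs and the surrounding structure are assumptions):

```json
{
  "id": "ingest",
  "output": [
    {
      "id": "out",
      "address": "rtmp://someip/live/stream",
      "options": ["-codec", "copy", "-f", "flv"]
    }
  ]
}
```

```json
{
  "id": "consumer",
  "input": [
    {
      "id": "in",
      "address": "#ingest:output=out",
      "options": []
    }
  ]
}
```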
The second process will use rtmp://someip/live/stream
as its input address.
Placeholders are a way to parametrize parts of the config. A placeholder is surrounded by curly braces, e.g. {processid}
.
Some placeholders require parameters. Add parameters to a placeholder by appending a comma-separated list of key/values, e.g. {placeholder,key1=value1,key2=value2}
. This can be combined with escaping.
As of version 16.12.0 the value for a parameter of a placeholder can be a variable. Currently known variables are $processid
and $reference
. These will be replaced by the respective process ID and process reference, e.g. {rtmp,name=$processid.stream}
.
Example:
Assume you have an input process that gets encoded in three different resolutions that are written to disk. With placeholders you parametrize the output files. The input and the encoding options are left out for brevity.
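A sketch of what the three outputs could look like, using the {diskfs}, {processid}, and {reference} placeholders (encoding options omitted):

```json
"output": [
  {
    "id": "360",
    "address": "{diskfs}/{processid}_{reference}_360.mp4"
  },
  {
    "id": "720",
    "address": "{diskfs}/{processid}_{reference}_720.mp4"
  },
  {
    "id": "1080",
    "address": "{diskfs}/{processid}_{reference}_1080.mp4"
  }
]
```

With a process ID of a and a reference of b, this would produce the files a_b_360.mp4, a_b_720.mp4, and a_b_1080.mp4.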
In case you use a placeholder in a place where characters need escaping (e.g. in the options of the tee
output muxer), you can define the character to be escaped in the placeholder by adding it to the placeholder name and prefixing it with a ^
.
Example: you have a process with the ID abc:snapshot
and in a filter option you have to escape all :
in the value for the {processid}
placeholder, write {processid^:}
. It will then be replaced by abc\:snapshot
. The escape character is always \
. In case there are \
in the value, they will also get escaped.
All known placeholders are:
Will be replaced by the ID of the process. Locations where this placeholder can be used: input.id
, input.address
, input.options
, output.id
, output.address
, output.options
, output.cleanup.pattern
Will be replaced by the reference of the process. Locations where this placeholder can be used: input.id
, input.address
, input.options
, output.id
, output.address
, output.options
, output.cleanup.pattern
Will be replaced by the ID of the input. Locations where this placeholder can be used: input.address
, input.options
Will be replaced by the ID of the output. Locations where this placeholder can be used: output.address
, output.options
, output.cleanup.pattern
Will be replaced by the provided value of storage.disk.dir. Locations where this placeholder can be used: options
, input.address
, input.options
, output.address
, output.options
As of version 16.12.0 you can use the alternative syntax {fs:disk}
.
Will be replaced by the internal base URL of the memory filesystem. This placeholder is convenient if you change, e.g., the listening port of the HTTP server. Then you don't need to modify each process configuration where you use the memory filesystem.
Locations where this placeholder can be used: input.address
, input.options
, output.address
, output.options
Example: {memfs}/foobar.m3u8
will be replaced with http://127.0.0.1:8080/memfs/foobar.m3u8
if the datarhei Core is listening on 8080 for HTTP requests.
As of version 16.12.0 you can use the alternative syntax {fs:mem}
.
As of version 16.12.0. Will be replaced by the internal base URL of the named filesystem. This placeholder is convenient if you change, e.g., the listening port of the HTTP server. Then you don't need to modify each process configuration where you use that filesystem.
Locations where this placeholder can be used: input.address
, input.options
, output.address
, output.options
Predefined names are disk
and mem
. Additional names correspond to the names of the mounted S3 filesystems.
Example: {fs:aws}/foobar.m3u8
will be replaced with http://127.0.0.1:8080/awsfs/foobar.m3u8
if the datarhei Core is listening on 8080 for HTTP requests, and the S3 storage is configured to have name aws
and the mountpoint /awsfs
.
Will be replaced by the internal address of the RTMP server. This placeholder is convenient if you change, e.g., the listening port or the app of the RTMP server. Then you don't need to modify each process configuration where you use the RTMP server.
Required parameter: name
(name of the stream)
Locations where this placeholder can be used: input.address
, output.address
Will be replaced by the internal address of the SRT server. This placeholder is convenient if you change, e.g., the listening port or the app of the SRT server. Then you don't need to modify each process configuration where you use the SRT server.
Required parameters: name
(name of the resource), mode
(either publish
or request
)
Locations where this placeholder can be used: input.address
, output.address
Requires Core v16.11.0+
As of version 16.12.0 the placeholder accepts the parameter latency
and defaults to 20ms.
Create a new process. The ID of the process will be required in order to query or manipulate the process later on. If you don't provide an ID, it will be generated for you. The response of the successful API call includes the process config as it has been stored (including the generated ID).
Example:
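A sketch using curl, assuming the Core is listening on 127.0.0.1:8080, the v3 API path /api/v3/process, JWT authentication with an access token in $TOKEN, and the process config stored in a local file process.json:

```sh
curl -X POST "http://127.0.0.1:8080/api/v3/process" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @process.json
```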
Description:
This API call lets you query the current state of the processes that are registered on the datarhei Core. For each process several aspects are available, such as:
config
The config with which the process has been created.
By default all aspects are included in the process listing. If you are only interested in specific aspects, then you can use the ?filter=...
query parameter. Provide it a comma-separated list of aspects and then only those will be included in the response, e.g. ?filter=state,report
.
This API call lists all processes that are registered on the datarhei Core. You can restrict the listed processes by providing
a comma-separated list of specific IDs (?id=a,b,c
)
a reference (?reference=...
)
a pattern for the matching ID (?idpattern=...
)
a pattern for the matching references (?refpattern=...
)
Example:
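A sketch using curl, with the same assumptions about address, API path, and JWT token as above; this would list only the state aspect of all processes with the reference mygroup:

```sh
curl "http://127.0.0.1:8080/api/v3/process?filter=state&reference=mygroup" \
  -H "Authorization: Bearer $TOKEN"
```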
Description:
If you know the ID of the process you want the details about, you can fetch them directly. Here you can apply the filter
query parameter to select the aspects.
Example:
The second parameter of client.Process
is the list of aspects. An empty list means all aspects.
Description:
This endpoint lets you fetch directly the config of a process.
Example:
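A sketch using curl for a process with the ID test (same assumptions about address, API path, and JWT token as above):

```sh
curl "http://127.0.0.1:8080/api/v3/process/test/config" \
  -H "Authorization: Bearer $TOKEN"
```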
Description:
You can change the process configuration of an existing process. It doesn't matter if the process to be updated is currently running or not. The current order will be transferred to the updated process.
The new process configuration is not required to have the same ID as the one you're about to replace. After the successful update you have to use the new process ID in order to query or manipulate the process.
The new process configuration is checked for validity before it replaces the current process configuration.
As of version 16.12.0 you can provide a partial process config for updates, i.e. you need to PUT
only those fields that actually change.
Example:
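A sketch using curl, replacing the process test with the config stored in process.json (same assumptions about address, API path, and JWT token as above):

```sh
curl -X PUT "http://127.0.0.1:8080/api/v3/process/test" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @process.json
```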
Client details
Description:
Delete a process. If the process is currently running, it will be stopped gracefully before it will be removed.
Example:
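A sketch using curl, deleting the process test (same assumptions as above):

```sh
curl -X DELETE "http://127.0.0.1:8080/api/v3/process/test" \
  -H "Authorization: Bearer $TOKEN"
```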
Description:
Send a command to a process
There are basically two commands you can give to a process: start
or stop
. This is the order for the process.
In addition to these two commands there are the commands restart
, which sends a stop
followed by a start
command packed into one command, and reload
, which is the same as if you would update the process config with itself, e.g. in order to update references to another process.
start
the process. If the process is already started, this won't have any effect.
stop
the process. If the process is already stopped, this won't have any effect.
restart
the process. If the process is not running, this won't have any effect.
reload
the process. If the process was running, the reloaded process will start automatically.
Example:
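A sketch using curl, sending the start command to the process test (assuming the command endpoint accepts a JSON body with a command field; same assumptions about address, API path, and JWT token as above):

```sh
curl -X PUT "http://127.0.0.1:8080/api/v3/process/test/command" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"command": "start"}'
```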
Description:
Probing a process means detecting the vitals of the inputs of a process (e.g. frames, bitrate, codec, ...) for each input and stream. For example, a video file on disk may contain two video streams (low and high resolution) as well as audio and subtitle streams in different languages.
A process must already exist before it can be probed. During probing, only the global FFmpeg options and the inputs are used to construct the FFmpeg command line.
The probe returns an object with an array of the detected streams and an array of lines from the output of the ffmpeg command.
In the following example we assume the process config with these inputs for the process test. Parts that are not relevant for probing have been left out for brevity:
The expected result would be:
The field index
refers to the input and the field stream
refers to the stream of an input.
Example: probe the inputs of a process with the ID test
:
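A sketch using curl (assuming the probe endpoint /api/v3/process/test/probe; same assumptions about address and JWT token as above):

```sh
curl "http://127.0.0.1:8080/api/v3/process/test/probe" \
  -H "Authorization: Bearer $TOKEN"
```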
Description:
Store metadata in a process
The metadata for a process allows you to store arbitrary JSON data with that process, e.g. if you have an app that uses the datarhei Core API you can store app-specific information in the metadata.
In order not to conflict with other apps that might write to the metadata as well, you have to provide a key under which your metadata is stored. Think of a namespace or similar.
Add or update the metadata for a process and key.
Example: Write a metadata object into the metadata for the process test
and key desc
.
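A sketch using curl (assuming the metadata endpoint /api/v3/process/test/metadata/desc and the same address and JWT token assumptions as above; the JSON body is an arbitrary example):

```sh
curl -X PUT "http://127.0.0.1:8080/api/v3/process/test/metadata/desc" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"title": "Test stream", "description": "Sample metadata"}'
```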
Description:
Read out the stored metadata. The key is optional. If the key is not provided, all stored metadata for that process will be in the result.
Example: read the metadata object for process test
that is stored under the key desc
.
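A sketch using curl (same endpoint, address, and JWT token assumptions as above):

```sh
curl "http://127.0.0.1:8080/api/v3/process/test/metadata/desc" \
  -H "Authorization: Bearer $TOKEN"
```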
Description:
The process report captures the log output from FFmpeg. The output is split into a prelude and the log output. The prelude is everything before the first progress line is printed. Everything after that is part of the log. The progress lines themselves are not part of the report. Each log line comes with a timestamp of when it was captured from the FFmpeg output.
The process report helps you to analyze any problems with the FFmpeg command line built from the inputs, outputs, and their options in the process configuration.
If the process is running, the FFmpeg logs will be written to the current report. As soon as the process finishes, the report will be moved to the history. The report history is also part of the response of this API endpoint.
You can define the number of log lines and how many reports should be kept in the history in the corresponding section of the datarhei Core configuration.
The following is an example of a report.
Example: read the report of a process with the ID test
:
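A sketch using curl (assuming the report endpoint /api/v3/process/test/report; same address and JWT token assumptions as above):

```sh
curl "http://127.0.0.1:8080/api/v3/process/test/report" \
  -H "Authorization: Bearer $TOKEN"
```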
Description:
The process state reflects the current vitals of a process. This includes, for example, how long the process has already been in this state, the order (i.e. whether it should be running or stopped), the current CPU and memory consumption, the actual FFmpeg command line, and some more.
If the process is running you will receive progress data in addition to the above-mentioned metrics. Progress data includes, for each input and output stream, the bitrate, framerate, bytes read/written, the speed of the processing, duplicated and dropped frames, and so on.
The following is an example output for a running process:
Example: read the state of a process with the ID test:
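A sketch using curl (assuming the state endpoint /api/v3/process/test/state; same address and JWT token assumptions as above):

```sh
curl "http://127.0.0.1:8080/api/v3/process/test/state" \
  -H "Authorization: Bearer $TOKEN"
```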
Description:
The autostart
option will cause the process to be started as soon as it has been created. This is equivalent to setting this option to false
, creating the process, and then issuing the start command.
With the pattern
parameter you can select files based on a glob pattern, with the addition of the **
placeholder to include multiple subdirectories, e.g. selecting all .ts
files in the root directory has the pattern /*.ts
, selecting all .ts
files in the whole filesystem has the pattern /**.ts
.
This will create three files with the names a_b_360.mp4
, a_b_720.mp4
, and a_b_1080.mp4
in the directory defined by storage.disk.dir.
Example: {rtmp,name=foobar.stream}
will be replaced with rtmp://127.0.0.1:1935/live/foobar.stream?token=abc
, if the RTMP server is configured to listen on port 1935, has the app /live and requires the token abc
. See the RTMP configuration.
Example: {srt,name=foobar,mode=request}
will be replaced with srt://127.0.0.1:6000?mode=caller&transtype=live&streamid=foobar,mode:request,token=abc&passphrase=1234
, if the SRT server is configured to listen on port 6000, requires the token abc
and the passphrase 1234
. See the SRT configuration.
You can control the process with commands.
state
The current state of the process, e.g. whether it's currently running and for how long. If a process is running, the progress data is also included. This includes a list of all input and output streams with all their vitals (frames, bitrate, codec, ...).
report
The logging output from the FFmpeg process and a history of previous runs of the same process.
metadata
All metadata associated with this process.
With the idpattern
and refpattern query parameters you can select process IDs and/or references based on a glob pattern. If you provide a list of specific IDs or a reference and patterns for IDs or references, then the patterns will be matched first. The resulting list of IDs and references is then checked against the provided list of IDs or the reference.
The datarhei Core includes a simple SRT server for publishing and playing streams. Check out the SRT configuration and the SRT guide. This API endpoint will list the details of all currently publishing and playing streams.
This endpoint is still experimental and may change in a later minor version increase.
The datarhei Core includes a simple RTMP server for publishing and playing streams. Check out the RTMP configuration and the RTMP guide. This API endpoint will list the names of all currently publishing streams.
Skills denote the capabilities of the used FFmpeg binary. It includes version information, supported input and output protocols, available hardware accelerators, supported formats for muxing and demuxing, filters, and available input and output devices.
Example:
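A sketch using curl (assuming the skills endpoint /api/v3/skills; same address and JWT token assumptions as in the process examples):

```sh
curl "http://127.0.0.1:8080/api/v3/skills" \
  -H "Authorization: Bearer $TOKEN"
```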
Description:
Reloading the skills might be necessary if you plug in, e.g., a USB device. It will only show up in the list of available devices if the devices are probed again.
Example:
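A sketch using curl, assuming the reload endpoint is GET /api/v3/skills/reload (the path and method are assumptions; same address and JWT token assumptions as above):

```sh
curl "http://127.0.0.1:8080/api/v3/skills/reload" \
  -H "Authorization: Bearer $TOKEN"
```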
Data flows
Core launches and monitors FFmpeg processes
FFmpeg can use HTTP, RTMP, and SRT services as streaming backends for processing incoming and outgoing A/V content.
Several storage locations are available for the HTTP service: the in-memory filesystem, aka MemFS (very fast, no disk I/O), and the disk filesystem, aka DiskFS, for storage on the HDD/SSD of the host system.
Optionally, FFmpeg can access host system devices such as GPU and USB interfaces (requires FFmpeg built-in support).
FFmpeg can also use external input and output URLs.
Go v1.18+ (Download here)
Clone the repository and build the binary:
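A sketch of the usual steps, assuming the repository at github.com/datarhei/core and that the default make target builds the binary (run make help to see all targets):

```sh
git clone https://github.com/datarhei/core.git
cd core
make
```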
After the build process, the binary is available as core
For more build options, run make help.
If you want to run the binary on a different operating system and/or architecture, you can create the appropriate binary by simply setting some environment variables, e.g.
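For example, a sketch for building a linux/arm64 binary using the standard Go cross-compilation variables (assuming the main package is in the repository root; adjust GOOS and GOARCH as needed):

```sh
GOOS=linux GOARCH=arm64 go build -o core .
```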
Build the Docker image and run it to try out the API with FFmpeg
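A sketch, assuming a Dockerfile in the repository root and the default HTTP port 8080 (adjust the image tag and port mapping to your setup):

```sh
docker build -t core:local .
docker run -it --rm --name core -p 8080:8080 core:local
```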
The source code is formatted with go fmt
, or run make fmt
. Static analysis of the source code is done with staticcheck
(see staticcheck), or run make lint
.
Before committing changes, you should run make commit
to ensure that the source code is in shape.
Date: 12.02.2021 Version: 2.1.0 (non-public release)
Goal: Bandwidth Limitation Test (HLS Sessions)
The limitation was the network card, not the CPU, memory, or application.
We offer multiple forms of service and can provide support for any streaming, whether for professional broadcasting or personal use. We are excited to learn more about your project.
To ensure your project's success, we offer installation services and ongoing support from the datarhei team. Helping Hands includes installing a datarhei Restreamer or datarhei Core on a server instance and managing the server as a managed service.
Installation
Configuration
Updates
Fix errors
Communication and assistance are available through email or chat and can be conducted in English and German. We appreciate your support of "Helping Hands."
Within 48 hours of becoming a patron or sponsor on Open Collective, a representative from datarhei will contact you for further information regarding the installation process.
Service level agreements are in effect for the duration of the active donation.
If you have questions about the Helping Hands Service Level Agreement (SLA), contact us via chat or by email at info@datarhei.com. We're here to help!
If you have a commercial request, such as a bug fix or feature enhancement, just let us know, and we'll be happy to provide a quote.
Commercial requests fund OSS and are always a priority.
Please fill out this form to help us respond to your request as quickly as possible. We're able to assist you in both German and English. However, please remember that we cannot process any requests not submitted through this form.
We will answer as soon as possible, but without a guarantee (except for patrons and Open Collective sponsors).
Every little bit helps! Consider becoming a patron on Patreon or backer on Open Collective to support keeping the software free.
💙💚🧡💜 Help datarhei with a rating or review on Google. #feelsgoodtobeloved
💛 Thanks for using datarhei software. Your datarhei.com team
The Core-FFmpeg bundle uses Docker's multi-stage process so that FFmpeg and the Core can be updated and maintained independently.
When building the Core-FFmpeg bundle, an FFmpeg image is used. The previously created Golang libraries and folder structures are copied into this image.
This process speeds up the creation of the final Core-FFmpeg bundle, as existing or previously created images can be used, and compiling all the code is no longer required.
The following base images are available:
docker.io/datarhei/base:alpine-ffmpeg-latest
docker.io/datarhei/base:alpine-ffmpeg-rpi-latest
docker.io/datarhei/base:ubuntu-ffmpeg-cuda-latest
docker.io/datarhei/base:ubuntu-ffmpeg-vaapi-latest
docker.io/datarhei/base:alpine-core-latest
docker.io/datarhei/base:ubuntu-core-latest
Specific versions are available on the Docker website:
Dockerfile without --disable-debug and --disable-doc.
Arguments:
default Dockerfile: Dockerfile.alpine Image name: datarhei/base:alpine-ffmpeg-latest
rpi Dockerfile: Dockerfile.alpine.rpi Image name: datarhei/base:alpine-ffmpeg-rpi-latest
cuda Dockerfile: Dockerfile.ubuntu.cuda Image name: datarhei/base:ubuntu-ffmpeg-cuda-latest
vaapi Dockerfile: Dockerfile.ubuntu.vaapi Image name: datarhei/base:ubuntu-ffmpeg-vaapi-latest
You can find the Dockerfile for the bundle (Dockerfile.bundle) in the cloned Core repository.
Docker supports multi-architecture images via --platform linux/amd64,linux/arm64,linux/arm/v7.
We're the team at FOSS GmbH from Switzerland, the creators of the datarhei software. We'll do our best to get back to you within 1-2 days during our regular business hours, excluding weekends and holidays in Switzerland and Germany. No matter what you need help with, we're here to provide professional support, software development, and consulting for anything related to datarhei software.
Private and non-commercial requests can be discussed and resolved on our public channels.
List all currently publishing RTMP streams.
OK
Get the last log lines of the Restreamer application
Format of the list of log events (*console, raw)
application log
Fetch minimal statistics about a process, which is not protected by any auth.
ID of a process
OK
List all files on the memory filesystem. The listing can be ordered by name, size, or date of last modification in ascending or descending order.
glob pattern for file names
none, name, size, lastmod
asc, desc
OK
Retrieve a new access token by providing the refresh token
OK
Remove a file from the memory filesystem
Path to file
OK
Fetch a file from the memory filesystem
Path to file
OK
Remove a file from the memory filesystem
Path to file
OK
Delete a process by its ID
Process ID
OK
Create a link to a file in the memory filesystem. The file linked to has to exist.
Path to file
Path to the file to link to
Created
Retrieve the previously stored JSON metadata under the given key. If the key is empty, all metadata will be returned.
Process ID
Key for data store
OK
Retrieve valid JWT access and refresh tokens to use for accessing the API. Login either by username/password or Auth0 token
Login data
OK
Fetch a file from the memory filesystem
Path to file
OK
Retrieve valid JWT access and refresh tokens to use for accessing the API. Login either by username/password or Auth0 token
Login data
OK
Add arbitrary JSON metadata under the given key. If the key exists, all already stored metadata with this key will be overwritten. If the key doesn't exist, it will be created.
Process ID
Key for data store
Arbitrary JSON data. The null value will remove the key and its contents
OK
Writes or overwrites a file on the memory filesystem
Path to file
File data
Created
Writes or overwrites a file on the memory filesystem
Path to file
File data
Created
Issue a command to a process: start, stop, reload, restart
Process ID
Process command
OK
Probe an existing process to get detailed stream information on the inputs.
Process ID
OK
video
audio
common
Get the logs and the log history of a process.
Process ID
OK
Get the configuration of a process. This is the configuration as provided by Add or Update.
Process ID
OK
Replace an existing process.
Process ID
Process config
OK
List all known processes. Use the query parameter to filter the listed processes.
Comma separated list of fields (config, state, report, metadata) that will be part of the output. If empty, all fields will be part of the output.
Return only those processes that have this reference value. If empty, the reference will be ignored.
Comma separated list of process ids to list. Overrides the reference. If empty all IDs will be returned.
Glob pattern for process IDs. If empty all IDs will be returned. Intersected with results from refpattern.
Glob pattern for process references. If empty all IDs will be returned. Intersected with results from idpattern.
OK
List a process by its ID. Use the filter parameter to specify the level of detail of the output.
Process ID
Comma separated list of fields (config, state, report, metadata) to be part of the output. If empty, all fields will be part of the output
OK
Add a new FFmpeg process
Process config
OK
Refresh the available FFmpeg capabilities.
OK
List all detected FFmpeg capabilities.
OK
Get the state and progress data of a process.
Process ID
OK
List all currently publishing SRT streams. This endpoint is EXPERIMENTAL and may change in future.
OK
The available space in the receiver's buffer, in bytes
The available space in the sender's buffer, in bytes
Estimated bandwidth of the network link, in Mbps
The number of packets in flight
The maximum number of packets that can be "in flight"
Transmission bandwidth limit, in Mbps
Maximum Segment Size (MSS), in bytes
Accumulated difference between the current time and the time-to-play of a packet that is received late
Current minimum time interval between which consecutive packets are sent, in microseconds
The total number of received ACK (Acknowledgement) control packets
Instantaneous (current) value of pktRcvBuf, expressed in bytes, including payload and all headers (IP, TCP, SRT)
The timespan (msec) of acknowledged packets in the receiver's buffer
The number of acknowledged packets in receiver's buffer
Same as pktRecv, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)
Same as pktRcvDrop, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)
The total number of DATA packets dropped by the SRT receiver and, as a result, not delivered to the upstream application
The total number of received KM (Key Material) control packets
Same as pktRcvLoss, but expressed in bytes, including payload and all the headers (IP, TCP, SRT), bytes for the presently missing (either reordered or lost) packets' payloads are estimated based on the average packet size
The total number of SRT DATA packets detected as presently missing (either reordered or lost) at the receiver side
The total number of received NAK (Negative Acknowledgement) control packets
The total number of received DATA packets, including retransmitted packets
The total number of retransmitted packets registered at the receiver side
Timestamp-based Packet Delivery Delay value set on the socket via SRTO_RCVLATENCY or SRTO_LATENCY
Same as pktRcvUndecrypt, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)
The total number of packets that failed to be decrypted at the receiver side
Same as pktRecvUnique, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)
The total number of unique original, retransmitted or recovered by the packet filter DATA packets received in time, decrypted without errors and, as a result, scheduled for delivery to the upstream application by the SRT receiver.
Instant value of the packet reorder tolerance
Smoothed round-trip time (SRTT), an exponentially-weighted moving average (EWMA) of an endpoint's RTT samples, in milliseconds
Instantaneous (current) value of pktSndBuf, but expressed in bytes, including payload and all headers (IP, TCP, SRT)
The timespan (msec) of packets in the sender's buffer (unacknowledged packets)
The number of packets in the sender's buffer that are already scheduled for sending or even possibly sent, but not yet acknowledged
Same as pktSndDrop, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)
The total number of DATA packets dropped by the SRT sender that had no chance to be delivered in time
The total accumulated time in microseconds, during which the SRT sender has some data to transmit, including packets that have been sent, but not yet acknowledged
The total number of sent KM (Key Material) control packets
The total number of data packets considered or reported as lost at the sender side. Does not correspond to the packets detected as lost at the receiver side.
Timestamp-based Packet Delivery Delay value of the peer
The total number of sent ACK (Acknowledgement) control packets
Same as pktSent, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)
The total number of sent NAK (Negative Acknowledgement) control packets
The total number of sent DATA packets, including retransmitted packets
Same as pktRetrans, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)
The total number of retransmitted packets sent by the SRT sender
Same as pktSentUnique, but expressed in bytes, including payload and all the headers (IP, TCP, SRT)
The total number of unique DATA packets sent by the SRT sender
The time elapsed, in milliseconds, since the SRT socket has been created
Retrieve the currently active Restreamer configuration
OK
Update the current Restreamer configuration by providing a complete or partial configuration. Fields that are not provided will not be changed.
Restreamer configuration
OK
List all known metrics with their description and labels
OK
Get a minimal summary of all active sessions (i.e. number of sessions, bandwidth).
Comma separated list of collectors
Active sessions listing
Query the collected metrics
Metrics query
OK
Get a summary of all active and past sessions of the given collector.
Comma separated list of collectors
Sessions summary
Reload the currently active configuration. This will trigger a restart of the Core.
OK
List all files on a filesystem. The listing can be ordered by name, size, or date of last modification in ascending or descending order.
Name of the filesystem
glob pattern for file names
none, name, size, lastmod
asc, desc
OK
Remove a file from a filesystem
Name of the filesystem
Path to file
OK
Fetch a file from a filesystem
Name of the filesystem
Path to file
OK
Writes or overwrites a file on a filesystem
Name of the filesystem
Path to file
File data
Created