
Database

Settings for the database of operational data, e.g. processes, metadata, persisted session summaries, and so on.

Configuration

{
   "db": {
      "dir": "./config"
   }
}
CORE_DB_DIR="./config"

dir (string)

Directory for holding the operational data. The path is relative to where the binary is executed and the directory must exist.

The default value is ./config

Update & migration

Docker way

{image} and {params...} are placeholders.

Systemd way

{image} and {params...} are placeholders.

Hostname

Settings for the host datarhei Core is running on.

Configuration

{
   "host": {
        "name": ["domain.com"],
        "auto": true
   }
}
CORE_HOST_NAME=domain.com
CORE_HOST_AUTO=true

name (array)

A list of public host/domain names or IPs this host is reachable under. For the ENV use a comma-separated list of public host/domain names or IPs.

The default is an empty list.
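
For example, to set two public names via the environment variable (the hostnames are placeholders):

CORE_HOST_NAME="stream.example.com,www.example.com"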

auto (bool)

Enable detection of public IP addresses in case the list of names is empty.

By default this is set to true.

Filesystem

The API allows you to manipulate the contents of the available filesystems.

Logging

Logging settings for the datarhei Core.

Configuration

level (string)

The verbosity of the logging. The datarhei Core is writing the logs to stderr. Possible values are:

  • silent No logging at all.

  • error Only errors will be logged.

  • warn Warnings and errors will be logged.

  • info General information, warnings, and errors will be logged.

  • debug Debug messages and everything else will be logged. This is very chatty.

The default logging level is info.

topics (array)

Logging topics allow you to restrict what type of messages will be logged. This is practical if you enable debug logging and want to see only the logs you're interested in. An empty list of topics means that all topics will be logged.

A non-exhaustive list of logging topics:

  • cleanup

  • config

  • core

  • diskfs

  • http

  • httpcache

  • https

  • let's encrypt

  • memfs

  • process

  • processstore

  • rtmp

  • rtmp/s

  • rtmps

  • update

  • service

  • session

  • sessionstore

  • srt

By default all topics are logged.

max_lines (integer)

The log is also kept in memory for retrieval via the API. This value defines how many lines should be kept in memory.

The default is 1000 lines.

Web-Interface

Known interfaces based on the Core.

Restreamer-UI

The Restreamer-UI is an easy-to-use interface to manage multiple live streams and restreams on different platforms like your website, YouTube, Facebook, and more.

Quick start

1. Start the Core

2. Start the Interface

3. Open the Interface and connect it to the running Core

RTMP

The datarhei Core includes a simple RTMP server for publishing and playing streams. It is not enabled by default. You have to enable it in the config in the rtmp section or via the corresponding environment variables.

Example:

rtmp://127.0.0.1:1935/live/12345.stream

In the above example /live is the RTMP app and 12345.stream is the name of the resource.

Token

In order to protect access to the RTMP server you should define a token in the configuration. Only with a valid token is it possible to publish or play resources. The token has to be appended to the RTMP URL as a query string, e.g. rtmp://127.0.0.1:1935/live/12345.stream?token=abc.

As of version 16.12.0 you can write the token as the last part of the path in the URL instead of as a query string. The above example will look like rtmp://127.0.0.1:1935/live/12345.stream/abc. This allows you to enter the token into the "stream key" field in some clients.
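
For illustration, publishing to such a URL with FFmpeg might look like the following (a sketch; input.mp4 and the token abc are placeholders, and -c copy assumes the input is already H.264/AAC):

ffmpeg -re -i input.mp4 -c copy -f flv "rtmp://127.0.0.1:1935/live/12345.stream?token=abc"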

API

Via the RTMP endpoint in the API you can gather a list of the currently publishing RTMP resources.

{
   "log": {
      "level": "info",
      "topics": [],
      "max_lines": 1000
   }
}
CORE_LOG_LEVEL=info
CORE_LOG_TOPIC=[]
CORE_LOG_MAXLINES=1000

Profiling

The profiling endpoint allows you to fetch profiling information from a running datarhei Core instance. It has to be enabled with debug.profiling in the config.

Navigate your browser to /profiling, where you can access different diagnostic solutions as described in https://go.dev/doc/diagnostics.
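
For example, assuming the default listen address, profiling could be enabled and checked like this:

# enable profiling (equivalent to setting debug.profiling in the config)
export CORE_DEBUG_PROFILING=true

# with the Core running, fetch the profiling index
curl http://127.0.0.1:8080/profiling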

docker pull {image}
docker kill core
docker rm core
docker run {params ...} {image}
docker pull {image}
docker kill core
docker rm core
docker run {params ...} {image}
docker pull {image} && \
systemctl restart core.service
docker run -d --name core -p 8080:8080 datarhei/core:latest
docker run -d --name restreamer-ui -p 3000:3000 datarhei/restreamer-ui:latest
http://127.0.0.1:3000?address=http://127.0.0.1:8080

About

Manual version: built for Core v16.11+

The datarhei Core is a process management solution for FFmpeg that offers a range of interfaces for media content, including HTTP, RTMP, SRT, and storage options. It is optimized for use in virtual environments such as Docker. It has been implemented in various contexts, from small-scale applications like Restreamer to large-scale, multi-instance frameworks spanning multiple locations, such as dedicated servers, cloud instances, and single-board computers. The datarhei Core stands out from traditional media servers by emphasizing FFmpeg and its capabilities rather than focusing on media conversion.


Objectives of development

The objectives of development are:

  • Unhindered use of FFmpeg processes

  • Portability of FFmpeg, including management across development and production environments

  • Scalability of FFmpeg-based applications through the ability to offload processes to additional instances

  • Streamlining of media product development by focusing on features and design.

What issues have been resolved thus far?

Process management

  • Run multiple processes via API

  • Unrestricted FFmpeg commands in process configuration.

  • Error detection and recovery (e.g., FFmpeg stalls, dumps)

  • Referencing for process chaining (pipelines)

  • Placeholders for storage, RTMP, and SRT usage (automatic credentials management and URL resolution)

  • Logs (access to current stdout/stderr)

  • Log history (configurable log history, e.g., for error analysis)

  • Resource limitation (max. CPU and MEMORY usage per process)

  • Statistics (like FFmpeg progress per input and output, CPU and MEMORY, state, uptime)

  • Input verification (like FFprobe)

  • Metadata (option to store additional information like a title)

Media delivery

  • Configurable file systems (in-memory, disk-mount, S3)

  • HTTP/S, RTMP/S, and SRT services, including Let's Encrypt

  • Bandwidth and session limiting for HLS/MPEG DASH sessions (protects restreams from congestion)

  • Viewer session API and logging

Misc

  • HTTP REST and GraphQL API

  • Swagger documentation

  • Metrics incl. Prometheus support (also detects POSIX and cgroups resources)

  • Docker images for fast setup of development environments up to the integration of cloud resources

Quick start

Changelog

Releases

API Swagger-Documentation

Swagger documentation.

The documentation of the API is available on /api/swagger/index.html

Example:

docker run -d --name core -p 8080:8080 datarhei/core:latest

Open: http://127.0.0.1:8080/api/swagger/index.html

To generate the API documentation from the code, use swag:

make init swagger
make run

After the first command the swagger definition can be found at docs/swagger.json or docs/swagger.yaml.

The second command will build the core binary and start it. With the default configuration, the Swagger UI is available at http://localhost:8080/api/swagger/index.html.

Public demo:

Ping

The /ping endpoint returns a plain text pong response. This can be used for liveness and/or latency checks.

curl http://127.0.0.1:8080/ping

This is currently not implemented.

Installation

How to start and configure the Core via Docker.

Quick start

1. Install Docker if not present

Docker images can also be run with other OCI-compatible container services, like podman, Buildah, containerd, LXC, and kaniko.

Native installations without Docker are possible but not supported; issues with such setups are not tracked.

2. Continue with the Beginner's Guide

Docker images

Select the CPU architecture and, if desired, the GPU support:

Pi 3 supports MMAL/OMX for 32 bit (64 bit is not supported).

Pi 4 supports V4L2 M2M for 32/64 bit.

Hint: raspi-config requires gpu_mem=256 or more.

Docker run {...params}

All default values can be changed and are described on the Configuration page.

Complete example:

${PWD} creates a folder structure in the folder where the command is issued. Please correct this if necessary.

Replace %USERPROFILE% with something like c:/myfolder

Directory exports

$HOST_DIR can be adjusted without reconfiguring the app. For the $CORE_DIR, check the configuration instructions.

Configuration directory

Directory for holding the config and operational data.

${PWD} creates a folder structure in the folder where the command is issued. Please correct this if necessary.

Data directory

Directory on disk, exposed on the HTTP path "/".

${PWD} creates a folder structure in the folder where the command is issued. Please correct this if necessary.

Port

$HOST_PORT can be adjusted without reconfiguring the app. For the $CORE_PORT, check the configuration instructions.

HTTP Port

HTTP listening address.

HTTPS Port

HTTPS listening address.

RTMP Port

RTMPS Port

RTMP server listen address.

SRT Port (UDP)

SRT server listen address.

/udp is required for SRT port-mapping.

With --net=host the container is started without network isolation. In this case, port forwarding is not required.

Environment variables

More in the Configuration instructions.

Device access

Allow FFmpeg to access GPUs, USB, and other devices available in the container.

Network issues (seccomp)

If seccomp is active and no internal-to-external communication is possible:

Docker commands

Start in background

Stop

Kill and remove the instance

Update the local image and restart the Core

Top

Logging

Systemd

To manage the Core container via systemd (the Linux service and process manager):

Service file

Adjust the docker command options according to your setup.

Commands

Install

Uninstall

Start

Stop

Update image

Status

Logging

Storage

Settings for accessing the available storage types. The storages are accessible via HTTP, mounted at different paths.

Configuration

mimetypes_file (string)

Path to a file with MIME type definitions. The file has one MIME type per line, followed by a list of file extensions (including the "."). Files served from the storages will have the matching MIME type associated with them.

Example:

text/plain  .txt
text/html   .htm .html
...

Relative paths are interpreted relative to where the datarhei Core binary is executed.

Default: ./mime.types

cors.origins (array)

Define a list of allowed CORS origins for accessing the storages.

By default it contains only the element *, allowing access from anywhere.
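
For example, restricting access to two origins via the environment variable (the origins are placeholders; the comma-separated form is assumed to follow the same convention as the other list-valued environment variables):

CORE_STORAGE_CORS_ORIGINS="https://example.com,https://player.example.com"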

Disk

The disk storage is mounted at / via the HTTP server.

In-memory

The memory storage is mounted at /memfs via the HTTP server.

S3

The S3 storage is mounted at the configured path via the HTTP server.

S3 storage is available as of version 16.12.0

In-memory

The settings for the in-memory filesystem. This filesystem is accessible on /memfs via HTTP. This filesystem can only be accessed via HTTP. Writing to and deleting from the filesystem can be restricted by HTTP basic auth.

Configuration

auth.enable (bool)

Set this value to true in order to enable basic auth for PUT, POST, and DELETE operations on /memfs. Read access (GET, HEAD) is not restricted. If enabled, you have to define a username and a password.

It is highly recommended to enable basic auth for write operations on /memfs.

By default this value is set to false.

auth.username (string)

Username for Basic-Auth of /memfs. This has to be set if basic auth is enabled.

By default this value is not set, i.e. an empty string.

auth.password (string)

Password for Basic-Auth of /memfs. This has to be set if basic auth is enabled.

By default this value is not set, i.e. an empty string.

max_size_mbytes (unsigned integer)

The maximum amount of data that is allowed to be stored in this filesystem. The value is interpreted as megabytes. A 507 Insufficient Storage will be returned if you hit the limit. Use a value equal to or smaller than 0 to not set any limits. The limit will be the available memory.

By default no limit is set, i.e. a value of 0.

purge (bool)

Whether to automatically remove the oldest files if the filesystem is full.

By default this value is set to false.
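
As an illustration, with basic auth enabled as above, a file could be uploaded to and read from /memfs like this (a sketch; host, credentials, and file name are placeholders):

# write (requires basic auth)
curl -X PUT "http://127.0.0.1:8080/memfs/hello.txt" \
   -u admin:datarhei \
   --data-binary @hello.txt

# read (not restricted)
curl "http://127.0.0.1:8080/memfs/hello.txt"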

RTMP

A simple RTMP server for publishing and playing streams

The settings for the built-in RTMP server. Check out our RTMP guide for more information.

Configuration

enable (bool)

Set this value to true in order to enable the built-in RTMP server.

By default the RTMP server is disabled.

enable_tls (bool)

Set this value to true to enable the RTMPS server, which will run in parallel with the RTMP server on a different port. You have to set tls.enable to true in order to enable the RTMPS server, because it will use the same certificate as the HTTPS server.

By default TLS is disabled.

address (string)

If the RTMP server is enabled, it will listen on this address. The default address is :1935.

The default :1935 will listen on all interfaces on port 1935. To use a specific interface, additionally write its IP, e.g. 127.0.0.1:1935 to listen only on the loopback interface.

address_tls (string)

If the RTMPS server is enabled, it will listen on this address. The default address is :1936.

The default :1936 will listen on all interfaces on port 1936. To use a specific interface, additionally write its IP, e.g. 127.0.0.1:1936 to listen only on the loopback interface.

app (string)

Define the app a stream can be published on, e.g. /live to require the path in RTMP URLs to start with /live.

The default app is /.

token (string)

To prevent anybody from publishing or playing streams, define token to be a secret only known to the publishers and subscribers. The token has to be put in the query of the stream URL, e.g. /live/stream?token=abc123.

As of version 16.12.0 the token can be appended to the path instead of a query parameter, e.g. /live/stream/abc123. With this the token corresponds to a stream key.

By default the token is not set (i.e. an empty string).

SRT

A simple SRT server for publishing and playing streams

The settings for the built-in SRT server. Check out our SRT guide for more information.

Configuration

enable (bool)

Set this value to true in order to enable the built-in SRT server.

By default the SRT server is disabled.

address (string)

If the SRT server is enabled, it will listen on this address. The default address is :6000.

The default :6000 will listen on all interfaces on port 6000. To use a specific interface, additionally write its IP, e.g. 127.0.0.1:6000 to listen only on the loopback interface.

passphrase (string)

Define a passphrase in order to enable SRT encryption. If the passphrase is set it is required and applies to all connections.

By default the passphrase is not set (i.e. an empty string).

token (string)

The token is an arbitrary string that needs to be communicated in the streamid. Only with a valid token is it possible to publish or request streams. If the token is not set, anybody can publish and request streams.

By default the token is not set (i.e. an empty string).

log.enable (bool)

Set this value to true in order to enable logging for the SRT server. This will log events on the SRT protocol level. You have to provide the topics you are interested in, otherwise nothing will be logged.

By default the logging is disabled.

log.topic (array)

Logging topics allow you to define what type of messages will be logged. This is practical if you want to debug an SRT connection. An empty list of topics means that no topics will be logged.

Find a list of known logging topics on the SRT server project page.

By default no topics are logged (i.e. an empty array).

Sessions

Settings for session capturing. Sessions for HLS, RTMP, SRT, HTTP, and FFmpeg are captured.

Configurations

enable (bool)

Set this value to true in order to enable session capturing.

By default this value is set to true.

ip_ignorelist (array)

List of IP ranges in CIDR notation to ignore for session capturing. If either end of a connection falls into this list of IPs, the session will not be captured. For the environment variable provide a comma-separated list of IP ranges in CIDR notation.

By default this value is set to ["127.0.0.1/32","::1/128"].

session_timeout_sec (integer)

The timeout in seconds for an idle session. After this timeout the session is considered as closed. Applies only to HTTP and HLS sessions.

By default this value is set to 30 seconds.

persist (bool)

Whether to persist the session history. The session history is stored as sessions.json in db.dir. If the session history is not persisted, it will be kept only in memory.

By default the session history is not persisted, i.e. this value is set to false.

persist_interval_sec (integer)

Interval in seconds in which to persist the current session history. This setting only has an effect if persisting the session history is enabled.

By default this value is set to 300 seconds.

max_bitrate_mbit (unsigned integer)

The maximum allowed outgoing bitrate in mbit/s. If the limit is reached, any new HLS sessions will be rejected. A value of 0 means no limitation of the outgoing bitrate.

By default this value is set to 0, i.e. unlimited.

max_sessions (unsigned integer)

The maximum allowed number of simultaneous sessions. If the limit is reached, any new HLS sessions will be rejected. A value of 0 means no limitation of the number of sessions.

By default this value is set to 0, i.e. unlimited.

Metrics

Settings for collecting metrics of the core and FFmpeg processes.

Configuration

Caution with many processes and low values! This will increase CPU and RAM usage.

enable (bool)

Enable collecting metrics data of the datarhei Core itself and the FFmpeg processes. The metrics can be queried via the metrics API endpoint.

By default collecting the metrics is disabled.

enable_prometheus (bool)

Enable the Prometheus endpoint at /metrics. This requires that collecting metrics is enabled.

By default this is disabled.
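
With both enable and enable_prometheus set to true, the scrape endpoint can be checked like this (default listen address assumed):

curl http://127.0.0.1:8080/metrics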

range_sec (integer)

Define for how many seconds historic metrics data should be kept.

By default this value is set to 300.

interval_sec (integer)

Define in which interval (in seconds) the metrics should be collected.

By default this value is set to 2.

Router

Settings for static HTTP routes.

Configuration

blocked_prefixes (array)

List of path prefixes that are not allowed to be overwritten by a static route. If a static route would overwrite one of the blocked prefixes, an error will be thrown at startup. For the environment variable, provide a comma-separated list of prefixes, e.g. /prefix1,/prefix2.

By default this value is set to ["/api"].

routes (map)

A list of static routes. This maps a path to a different path and results in an HTTP redirect, e.g. {"/foo.txt": "/bar.txt"} will redirect requests from /foo.txt to /bar.txt. Paths have to start with a / and they are based on storage.disk.dir on the filesystem.

The special suffix /* of a route allows you to serve whole directories from another root than storage.disk.dir, e.g. {"/ui/*": "/path/to/ui"}. If you use a relative path as the target, it will be appended to the current working directory.

By default no routes are defined.

ui_path (string)

A path to a directory holding UI files. This will be mounted as /ui.

By default this value is not set, i.e. an empty string.

Debug

The debugging settings can help to find and solve issues with the datarhei Core.

Configuration

profiling (bool)

By setting this to true, the endpoint /profiling will be established, where you can access different diagnostic solutions as described in https://go.dev/doc/diagnostics.

By default this setting is set to false.

force_gc (integer)

Golang is usually quite greedy when it comes to claiming memory for itself. This setting lets you define the number of seconds between forced runs of the garbage collector in order to return memory to the OS. If this is not set, the runtime will decide on its own when to run the garbage collector.

Alternatively, you can set the environment variable GOMEMLIMIT to a value in bytes in order to set a soft memory limit. This will influence the garbage collector when the consumed memory comes close to this limit. If you use the GOMEMLIMIT environment variable, you are advised to leave the force_gc option disabled.

The default for this setting is 0 (i.e. disabled).

memory_limit_mbytes (integer)

As of version 16.12.0, use this value to impose a soft limit on the memory consumption of the Core application itself (i.e. without the memory consumption of the FFmpeg processes). This has the same effect as setting the GOMEMLIMIT environment variable.

The provided value is the number of megabytes the Core application is allowed to consume.

The default for this setting is 0 (i.e. no limit).
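
For example, to cap the Core's own memory at roughly 512 MB when running via Docker (the value is illustrative):

docker run --detach --name core \
    --env CORE_DEBUG_MEMORY_LIMIT_MBYTES=512 \
    --publish 8080:8080 \
    datarhei/core:latest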

API Clients

Golang

Example

Repository

Python3

Install

Example

Repository

Filesystems

The datarhei Core provides several filesystem abstractions that you can use in your FFmpeg process configurations.

Disk

The disk filesystem is the directory you defined in the configuration at storage.disk.dir. Any FFmpeg command will be restricted to this directory (or its subdirectories) for file access (read or write). One exception is /dev in order to access, e.g., USB cameras.

In a process configuration you can use the {diskfs} placeholder, such that you don't need to remember and write the configured path.

The contents of the disk filesystem are accessible read-only via the / path of the datarhei Core HTTP server.

In order to access and modify the contents of the filesystem programmatically, you can use the corresponding API endpoints.

Memory

The datarhei Core has a built-in memory filesystem. It is enabled by default and only accessible via HTTP. Its contents can be accessed via the /memfs path of the datarhei Core HTTP server.

In the storage.memory section of the configuration you can define different aspects of the memory filesystem, such as the maximum size of the filesystem, the maximum file size, password protection, and so on.

In a process configuration you can use the {memfs} placeholder, such that you don't need to remember and write the whole base URL.

The /memfs path is not read-only. Write access is protected via HTTP BasicAuth.

In order to access and modify the contents of the filesystem programmatically, you can use the corresponding API endpoints.

S3

The datarhei Core allows you to mount an S3-compatible filesystem. It is only accessible via HTTP. Its contents can be accessed via the configured path of the datarhei Core HTTP server.

In the storage.s3 section of the configuration you can define different aspects of the S3 filesystem, such as the login credentials, bucket name, and so on.

In a process configuration you can use the {fs:[name]} placeholder, where [name] is the configured name of the S3 filesystem, e.g. {fs:aws}, such that you don't need to remember and write the whole base URL.

The S3 filesystem HTTP mountpoint is not read-only. Write access is protected via HTTP BasicAuth.

In order to access and modify the contents of the filesystem programmatically, you can use the corresponding API endpoints.

SRT

The datarhei Core includes a simple SRT server for publishing and playing streams. It is not enabled by default. You have to enable it in the config in the srt section or via the corresponding environment variables.

SRT is a modern live-streaming protocol with low latency and network failure tolerance.

The SRT server supports publishing and requesting streams, similar to an RTMP server. With your SRT client, you always have to connect to the SRT server in caller mode and live transmission mode.

Example:

Passphrase

If a passphrase is set in the config (or via environment variable), you have to provide the passphrase in the SRT URL. Example SRT URL with the passphrase foobarfoobar:

StreamID

In order to define whether you want to publish or request a resource, you have to provide your intent in the streamid.

The streamid is formatted as follows:

The resource is the name of the stream. This can be anything. You can publish only one stream with that same name.

The mode is either request or publish. If you don't provide a mode, request will be assumed. You can only request resources that are currently publishing. You can only publish resources that are not already publishing.

The token is the one defined in the config (see srt.token). If no token is configured, you can omit the token in the streamid.

Examples

Publishing the resource 12345 with the token foobar:

Publishing the resource 12345 with no token defined in the configuration:

Requesting the resource 12345 with no token defined in the configuration:

Requesting the resource 12345 with the token foobar:

The whole SRT URL might look like this for the last example:

API

Via the SRT endpoint in the API you can gather statistics about the currently connected SRT clients.
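
For illustration, publishing and playing with an FFmpeg build that includes libsrt might look like this (a sketch; the input file, the resource 12345, and the token foobar are placeholders, and the exact SRT URL options depend on your client):

# publish
ffmpeg -re -i input.mp4 -c copy -f mpegts "srt://127.0.0.1:6000?streamid=12345,mode:publish,token:foobar"

# play
ffplay "srt://127.0.0.1:6000?streamid=12345,token:foobar"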

Log

Get the last log lines of the Core application.

The last log events are kept in memory and are accessible via the /api/v3/log endpoint. You can either retrieve the log lines in the format the Core is writing them to the console (?format=console) or in raw format (?format=raw). By default they are returned in "console" format.

You can define the number of last log lines kept in memory by either setting an appropriate value in the config (log.max_lines) or via an environment variable (CORE_LOG_MAXLINES).

Read

Example:

The GoClient always returns the logs in "raw" format.

Description:

RTMP

The datarhei Core includes a simple RTMP server for publishing and playing streams. Check out the RTMP configuration and the RTMP guide. This API endpoint will list the names of all currently publishing streams.

Architecture

Data flows

A/V Processing

  1. Core launches and monitors FFmpeg processes

  2. FFmpeg can use HTTP, RTMP, and SRT services as streaming backends for processing incoming and outgoing A/V content.

  3. Several storage locations are available for the HTTP service: the in-memory filesystem, aka MemFS (very fast, no disk I/O), and the disk filesystem, aka DiskFS, for storage on the HDD/SSD of the host system.

  4. Optionally, FFmpeg can access host system devices such as GPU and USB interfaces (requires FFmpeg built-in support).

FFmpeg can also use external input and output URLs.

{
   "rtmp": {
      "enable": false,
      "enable_tls": false,
      "address": ":1935",
      "address_tls": ":1936",
      "app": "/",
      "token": ""
   }
}
CORE_RTMP_ENABLE=false
CORE_RTMP_ENABLE_TLS=false
CORE_RTMP_ADDRESS=":1935"
CORE_RTMP_ADDRESS_TLS=":1936"
CORE_RTMP_APP="/"
CORE_RTMP_TOKEN=
{
   "sessions": {
      "enable": true,
      "ip_ignorelist": [
         "127.0.0.1/32",
         "::1/128"
      ],
      "session_timeout_sec": 30,
      "persist": false,
      "persist_interval_sec": 300,
      "max_bitrate_mbit": 0,
      "max_sessions": 0
   }
}
CORE_SESSIONS_ENABLE=false
CORE_SESSIONS_IP_IGNORELIST="127.0.0.1/32,::1/128"
CORE_SESSIONS_SESSION_TIMEOUT_SEC=30
CORE_SESSIONS_PERSIST=false
CORE_SESSIONS_PERSIST_INTERVAL_SEC=300
CORE_SESSIONS_MAXBITRATE_MBIT=0
CORE_SESSIONS_MAXSESSIONS=0
{
   "metrics": {
      "enable": false,
      "enable_prometheus": false,
      "range_sec": 300,
      "interval_sec": 2
   }
}
CORE_METRICS_ENABLE=false
CORE_METRICS_ENABLE_PROMETHEUS=false
CORE_METRICS_RANGE_SECONDS=300
CORE_METRICS_INTERVAL_SECONDS=2
{
   "router": {
      "blocked_prefixes": [
         "/api"
      ],
      "routes": {},
      "ui_path": ""
   }
}
CORE_ROUTER_BLOCKED_PREFIXES="/api"
CORE_ROUTER_ROUTES=
CORE_ROUTER_UI_PATH=
{
   "debug": {
      "profiling": false,
      "force_gc": 0,
      "memory_limit_mbytes": 0,
   }
}
CORE_DEBUG_PROFILING=false
CORE_DEBUG_FORCE_GC=0
CORE_DEBUG_MEMORY_LIMIT_MBYTES=0
srt://127.0.0.1:6000?mode=caller&transmode=live&streamid=...
srt://127.0.0.1:6000?mode=caller&transmode=live&streamid=...&passphrase=foobarfoobar
[resource],mode:[request|publish],token:[token]
12345,mode:publish,token:foobar
12345,mode:publish
12345
12345,token:foobar
srt://127.0.0.1:6000?mode=caller&transmode=live&streamid=12345,token:foobar

API Security

These are the settings for securing the API from unwanted access.

Configuration

{
   "api": {
      "read_only": false,
      "access": {
         "http": {
            "allow": [],
            "block": []
         },
         "https": {
            "allow": [],
            "block": []
         }
      },
      "auth": {
         "enable": fals,
         "disable_localhost": false,
         "username": "",
         "password": "",
         "jwt": {
            "secret": ""
         },
         "auth0": {
            "enable": false,
            "tenants": []
         }
      }
   }
}
CORE_API_READ_ONLY=false
CORE_API_ACCESS_HTTP_ALLOW="127.0.0.1/32,::1/128"
CORE_API_ACCESS_HTTP_BLOCK="127.0.0.1/32,::1/128"
CORE_API_ACCESS_HTTPS_ALLOW="127.0.0.1/32,::1/128"
CORE_API_ACCESS_HTTPS_BLOCK="127.0.0.1/32,::1/128"
CORE_API_AUTH_ENABLE=true
CORE_API_AUTH_DISABLE_LOCALHOST=true
CORE_API_AUTH_USERNAME=
CORE_API_AUTH_PASSWORD=
CORE_API_AUTH_JWT_SECRET=mySecret
CORE_API_AUTH_AUTH0_ENABLE=false
CORE_API_AUTH_AUTH0_TENANTS=

read_only (bool)

Set this value to true in order to allow only read access to the API. All API endpoints for writing will not be mounted.

By default this value is set to false.

access.http.allow (array)

A list of IPs that are allowed to access the API via HTTP. Each entry has to be an IP range in CIDR notation, e.g. ["127.0.0.1/32","::1/128"]. Provide the list as comma-separated values for the environment variable, e.g. 127.0.0.1/32,::1/128. If the list is empty, then all IPs are allowed. If the list contains any invalid IP range, the server will refuse to start.

By default the list is empty.

access.http.block (array)

A list of IPs that are not allowed to access the API via HTTP. Each entry has to be an IP range in CIDR notation. Provide the list as comma-separated values for the environment variable. If the list is empty, then no IPs will be blocked. If the list contains any invalid IP range, the server will refuse to start.

By default the list is empty.

access.https.allow (array)

A list of IPs that are allowed to access the API via HTTPS. Each entry has to be an IP range in CIDR notation. Provide the list as comma-separated values for the environment variable. If the list is empty, then all IPs are allowed. If the list contains any invalid IP range, the server will refuse to start.

By default the list is empty.

access.https.block (array)

A list of IPs that are not allowed to access the API via HTTPS. Each entry has to be an IP range in CIDR notation. Provide the list as comma-separated values for the environment variable. If the list is empty, then no IPs will be blocked. If the list contains any invalid IP range, the server will refuse to start.

By default the list is empty.

auth.enable (bool)

Set this value to true to enable JWT authentication for the API. If it is enabled, you have to provide a username and password. The username and password will be sent to the /api/login endpoint in order to obtain an access and refresh JWT.

It is strongly recommended to enable authentication for the API in order to prevent access from unwanted parties.

By default this value is set to false.
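
For illustration, obtaining a JWT and using it could look like this (a sketch; credentials are placeholders, $ACCESS_TOKEN stands for the access token returned by the login call, and the usual Bearer scheme is assumed):

curl -X POST http://127.0.0.1:8080/api/login \
   -H 'Content-Type: application/json' \
   -d '{"username": "admin", "password": "datarhei"}'

curl http://127.0.0.1:8080/api/v3/process \
   -H "Authorization: Bearer $ACCESS_TOKEN"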

auth.disable_localhost (bool)

Set this value to true in order to allow unprotected access from localhost.

By default this value is set to false.

auth.username (string)

The username for JWT authentication. If JWT authentication is enabled, a username must be defined.

By default this value is empty, i.e. no username defined.

auth.password (string)

The password for JWT authentication. If JWT authentication is enabled, a password must be defined.

By default this value is empty, i.e. no password defined.

auth.jwt.secret (string)

A secret for signing the JWT. If you leave this value empty, a random secret will be generated for you.

By default this value is empty.

auth.auth0.enable (bool)

Set this value to true in order to enable API auth0 protection. With this a valid Auth0 access JWT can be used instead of a username/password in order to obtain the access and refresh JWT. Additionally you have to provide a list of tenants and their users to validate the Auth0 access JWT against.

By default this value is set to false.

auth.auth0.tenants (array)

A list of allowed tenants and their users. A tenant is a JSON object:

{
    "domain": "",
    "audience": "",
    "clientid": "",
    "users": [],
}

You can obtain the domain, audience, and clientid from your Auth0 account. You also have to provide a list of allowed users that are members of that tenant.

For providing the list of tenants and their users as an environment variable, you have to provide a comma-separated list of base64-encoded tenant JSON objects.

As of version 16.12.0 there's a different syntax available for providing the tenants as environment variable. A list of comma separated URLs of this form:

auth0://[clientid]@[domain]?aud=[audience]&user=...&user=...

By default this list is empty.
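
A sketch of how a single tenant object could be base64-encoded for the environment variable (all values are placeholders):

echo -n '{"domain": "example.eu.auth0.com", "audience": "https://api.example.com/", "clientid": "abc123", "users": ["auth0|123456"]}' | base64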

{
   "storage": {
     "mimetypes_file": "./mime.types",
     "cors": {
         "origins": [
            "*"
         ]
     },
     "disk": {...},
     "memory": {...},
     "s3": []
   }
}
CORE_STORAGE_MIMETYPES_FILE="./mime.types"
CORE_STORAGE_CORS_ORIGINS="*"
docker run {...params} datarhei/core:latest
docker run {...params} --privileged datarhei/core:rpi-latest
docker run {...params} --runtime=nvidia datarhei/core:cuda-latest
docker run {...params} --volume /dev/dri:/dev/dri datarhei/core:vaapi-latest
docker run --detach --name core \
    --privileged \
    --security-opt seccomp=unconfined \
    --env CORE_API_AUTH_ENABLE=true \
    --env CORE_API_AUTH_USERNAME=admin \
    --env CORE_API_AUTH_PASSWORD=datarhei \
    --publish 1935:1935 \
    --publish 1936:1936 \
    --publish 6000:6000/udp \
    --publish 8080:8080 \
    --publish 8181:8181 \
    --volume ${PWD}/core/config:/core/config \
    --volume ${PWD}/core/data:/core/data \
        datarhei/core:latest
docker run --detach --name core --privileged --security-opt seccomp=unconfined --env CORE_API_AUTH_ENABLE=true --env CORE_API_AUTH_USERNAME=admin --env CORE_API_AUTH_PASSWORD=datarhei --publish 1935:1935 --publish 1936:1936 --publish 6000:6000/udp --publish 8080:8080 --publish 8181:8181 --volume %USERPROFILE%/core/config:/core/config --volume %USERPROFILE%/core/data:/core/data datarhei/core:latest
docker ... 
    --volume $HOST_DIR:$CORE_DIR \
    ...
--volume ${PWD}/core/config:/core/config
--volume ${PWD}/core/data:/core/data
docker ... 
    --publish $HOST_PORT:$CORE_PORT \
    ...
--publish 8080:8080
--publish 8181:8181
--publish 1935:1935
--publish 1936:1936
--publish 6000:6000/udp
docker ... 
    --net=host \
    ...
docker ... 
    --env CORE_API_AUTH_ENABLE=true \
    --env CORE_API_AUTH_USERNAME=admin \
    --env CORE_API_AUTH_PASSWORD=datarhei \
    ...
--privileged
--security-opt seccomp=unconfined
docker run --detach --name core {params ...} {image}
docker stop core
docker kill core
docker rm core
docker pull {image}
docker kill core
docker rm core
docker run {params ...} {image}
docker top core
docker logs core -f
/etc/systemd/system/core.service
[Unit]
Description=datarhei Core
After=docker.service
Requires=docker.service
 
[Service]
TimeoutStartSec=0
Restart=always
ExecStart=/usr/bin/docker run \
  --rm --name core --privileged \
  --security-opt seccomp=unconfined \
  -v /opt/core/config:/core/config \
  -v /opt/core/data:/core/data \
  -p 8080:8080 -p 8181:8181 \
  -p 1935:1935 -p 1936:1936 \
  -p 6000:6000/udp \
  datarhei/core:latest
ExecStop=/usr/bin/docker kill core

[Install]
WantedBy=multi-user.target
systemctl daemon-reload && systemctl enable core.service
systemctl disable core.service
systemctl start core.service
systemctl stop core.service
docker pull datarhei/core:latest && \
systemctl restart core.service
systemctl status core.service
journalctl -u core.service -f
import "github.com/datarhei/core-client-go/v16"

client, err := coreclient.New(coreclient.Config{
    Address: "https://example.com:8080",
    Username: "foo",
    Password: "bar",
})
if err != nil {
    ...
}

processes, err := client.ProcessList(coreclient.ProcessListOptions{})
if err != nil {
    ...
}
pip install https://github.com/datarhei/core-client-python/archive/refs/heads/main.tar.gz
from core_client import Client

client = Client(
    base_url="https://example.com:8080",
    username="foo",
    password="bar"
)
client.login()

process_list = client.v3_process_get_list()
for process in process_list:
    print(process.id)
import asyncio
from core_client import AsyncClient

client = AsyncClient(
    base_url="https://example.com:8080",
    username="foo",
    password="bar"
)
client.login()

process_list = await client.v3_process_get_list()
for process in process_list:
    print(process.id)
CORE_STORAGE_MEMORY_AUTH_ENABLE=true
CORE_STORAGE_MEMORY_AUTH_USERNAME=admin
CORE_STORAGE_MEMORY_AUTH_PASSWORD=datarhei
CORE_STORAGE_MEMORY_MAXSIZEMBYTES=0
CORE_STORAGE_MEMORY_PURGE=false
CORE_SRT_ENABLE=false
CORE_SRT_ADDRESS=":6000"
CORE_SRT_PASSPHRASE=
CORE_SRT_TOKEN=
CORE_SRT_LOG_ENABLE=false
CORE_SRT_LOG_TOPIC=

Command

Send a command to a process

There are basically two commands you can give to a process: start and stop. This is the order given to the process.

In addition to these two commands there are restart, which sends a stop followed by a start packed into one command, and reload, which is the same as updating the process config with itself, e.g. in order to update references to another process.

  • start the process. If the process is already started, this won't have any effect.

  • stop the process. If the process is already stopped, this won't have any effect.

  • restart the process. If the process is not running, this won't have any effect.

  • reload the process. If the process was running, the reloaded process will start automatically.

Example:

Description:

Benchmark

Raspberry Pi 4

Date: 12.02.2021 Version: 2.1.0 (non-public release)

Goal: Bandwidth Limitation Test (HLS Sessions)

Settings

ENV

CORE_SESSION_BANDWIDTH_LIMIT=800
sysctl -w net.core.netdev_budget_usecs=60000
sysctl -w net.core.netdev_budget=900
sysctl -w net.core.rmem_max=26214400
sysctl -w net.core.wmem_max=2621440
echo "6" > /proc/irq/33/smp_affinity
echo "1-2" > /proc/irq/33/smp_affinity_list
echo c > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo c > /sys/class/net/eth1/queues/rx-0/rps_cpus
echo c > /sys/class/net/eth0/queues/tx-0/xps_cpus
echo c > /sys/class/net/eth0/queues/tx-1/xps_cpus
echo c > /sys/class/net/eth0/queues/tx-2/xps_cpus
echo c > /sys/class/net/eth0/queues/tx-3/xps_cpus
echo c > /sys/class/net/eth0/queues/tx-4/xps_cpus
/sbin/ethtool -K eth0 rx-checksum on
/sbin/ethtool -K eth1 rx-checksum on

Knowledge

Results

1x H.264 Encoding (1280x720, 1 Mbit/s)

Active HLS-Sessions: 696

1x H.264 Encoding (1280x720, 2 Mbit/s)

Active HLS-Sessions: 350

1x H.264 Encoding (1280x720, 4 Mbit/s)

Active HLS-Sessions: 184

The limitation was the network card, not the CPU, memory, or application.

Disk

The settings for the disk filesystem. This filesystem is accessible on / via HTTP. This filesystem can only be accessed for reading via HTTP. Writing to and deleting from the filesystem is possible via the API.

Configuration

dir (string)

Path to a directory on disk. It will be exposed on / for reading.

Relative paths are interpreted relative to where the datarhei Core binary is executed.

By default it is set to ./data.

max_size_mbytes (unsigned integer)

The maximum amount of data that is allowed to be stored in this filesystem. The value is interpreted as megabytes. A 507 Insufficient Storage will be returned if you hit the limit. Use a value equal to or smaller than 0 to not impose any limits. Then all the available space on the disk is the limit.

By default no limit is set, i.e. a value of 0.

cache.enable (bool)

Set this value to true in order to enable the cache for the disk. The cache is an LRU cache, i.e. the least recently accessed files in the cache will be removed in case the cache is full and a new file is to be added.

By default the value is set to true.

cache.max_size_mbytes (unsigned integer)

Limit for the size of the cache in megabytes. A value of 0 doesn't impose any limit. The limit will be the available memory.

By default no limit is set, i.e. a value of 0.

cache.ttl_seconds (integer)

Number of seconds to keep a file in the cache.

By default this value is set to 300 seconds.

cache.max_file_size_mbytes (unsigned integer)

Limit in megabytes for the size of a file that is allowed to be put into the cache.

By default this value is set to 1 megabyte.

cache.types.allow (array)

A list of file extensions to cache, e.g. [".ts", ".mp4"]. Leave the list empty in order to cache all files. Use a space-separated list of extensions for the environment variable, e.g. .ts .mp4.

By default the list is empty.

cache.types.block (array)

A list of file extensions not to cache, e.g. [".m3u8", ".mpd"]. Leave the list empty in order to block no extension. Use a space-separated list of extensions for the environment variable, e.g. .m3u8 .mpd.

By default the manifest files for HLS and DASH are blocked from caching, i.e. [".m3u8", ".mpd"].

Sessions

Sessions track user actions regarding pushing and pulling video data to and from the Core. There are different kinds of sessions that are captured:

  • HLS Sessions (hls collector) How many users are watching an HLS stream

  • HLS Ingress Sessions (hlsingress collector) How many users are publishing a stream via the in-memory filesystem

  • HTTP Sessions (http collector) How many users are reading or writing HTTP data via the built-in HTTP server

  • RTMP Sessions (rtmp collector) How many users are publishing or watching a stream via the built-in RTMP server

  • SRT Sessions (srt collector) How many users are publishing or watching a stream via the built-in SRT server

  • FFmpeg Sessions (ffmpeg collector) How many streams is FFmpeg using as an input or as an output

The data for each session includes the current ingress and egress bandwidth, the total amount of data, an identifier for the session itself, the local end of the stream, and the remote end of the stream.

The following API endpoints allow you to extract information about the currently active sessions and, additionally, a summary of the already finished sessions. Each endpoint requires a comma-separated list of collectors as the query parameter collectors.

Active sessions with summary

Example:

Description:

Active sessions

Example:

Description:

{
   "storage": {
      "disk": {
         "dir": "./data",
         "max_size_mbytes": 0,
         "cache": {
            "enable": true,
            "max_size_mbytes": 0,
            "ttl_seconds": 300,
            "max_file_size_mbytes": 1,
            "types": {
               "allow": [],
               "block": []
            }
         }
      }
   }
}
CORE_STORAGE_DISK_DIR="./data"
CORE_STORAGE_DISK_MAXSIZEMBYTES=0
CORE_STORAGE_DISK_CACHE_ENABLE=true
CORE_STORAGE_DISK_CACHE_MAXSIZEMBYTES=0
CORE_STORAGE_DISK_CACHE_TTLSECONDS=300
CORE_STORAGE_DISK_CACHE_MAXFILESIZEMBYTES=1
CORE_STORAGE_DISK_CACHE_TYPES_ALLOW=
CORE_STORAGE_DISK_CACHE_TYPES_BLOCK=".m3u8 .mpd"

S3

The settings for the S3 filesystem. This filesystem is accessible on the configured path via HTTP. This filesystem can only be accessed via HTTP. Writing to and deleting from the filesystem can be restricted by HTTP basic auth.

Any S3 compatible service can be used, e.g. Amazon, Minio, Backblaze, ...

Available as of version 16.12.0

Configuration

{
    "storage": {
        "s3": [
            {
                "name": "aws",
                "mountpoint": "/aws",
                "auth": {
                    "enable": true,
                    "username": "foo",
                    "password": "bar"
                },
                "endpoint": "...",
                "access_key_id": "...,
                "secret_access_key": "...",
                "bucket": "...",
                "region": "...",
                "use_ssl": true,
            },
            ...
        ]
    }
}
CORE_STORAGE_S3=https://access_key_id:[email protected]/bucket?name=aaa&mount=/abc&username=xxx&password=yyy

name (string)

The name for this storage. The name will be used in placeholders, e.g. {fs:aws}, and for accessing the filesystem via the API, e.g. /api/v3/fs/aws. The name memfs is reserved for the in-memory filesystem.

By default this value is not set, but is required.

mountpoint (string)

The path where the filesystem will be mounted in the HTTP server. It needs to be an absolute path. A mountpoint is required.

By default this value is not set, but is required.

auth.enable (bool)

Set this value to true in order to enable basic auth for PUT, POST, and DELETE operations on the configured mountpoint. Read access (GET, HEAD) is not restricted. If enabled, you have to define a username and a password.

It is highly recommended to enable basic auth for write operations on the mountpoint.

By default this value is set to false.

auth.username (string)

Username for Basic-Auth of the configured mountpoint. This has to be set if basic auth is enabled.

By default this value is not set, i.e. an empty string.

auth.password (string)

Password for Basic-Auth of the configured mountpoint. This has to be set if basic auth is enabled.

By default this value is not set, i.e. an empty string.

endpoint (string)

The endpoint for the S3 storage. For Amazon AWS S3 it would be e.g. s3.amazonaws.com. Ask your S3 storage provider for the necessary credentials.

By default this value is not set, i.e. an empty string.

access_key_id (string)

Your access key ID for the S3 storage. Ask your S3 storage provider for the necessary credentials.

By default this value is not set, i.e. an empty string.

secret_access_key (string)

Your secret access key for the S3 storage. Ask your S3 storage provider for the necessary credentials.

By default this value is not set, i.e. an empty string.

bucket (string)

The name of the bucket you want to use. If the bucket does not exist already it will be created.

By default this value is not set, i.e. an empty string.

region (string)

Identifier for the region the storage is located in, e.g. eu, us-west1, ... . If your S3 storage provider doesn't support regions, leave this field empty.

By default this value is not set, i.e. an empty string.

use_ssl (bool)

Whether to use HTTPS or HTTP. It is strongly recommended to enable this setting.

By default this is set to false.

from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_ping = client.ping()
print(core_ping)
{
   "storage": {
      "memory": {
         "auth": {
            "enable": true,
            "username": "admin",
            "password": "datarhei"
         },
         "max_size_mbytes": 0,
         "purge": false
      }
   }
}
{
   "srt": {
      "enable": false,
      "address": ":6000",
      "passphrase": "",
      "token": "",
      "log": {
         "enable": false,
         "topics": []
      }
   }
}
curl http://127.0.0.1:8080/api/v3/session/active?collectors=hls,rtmp \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_active_sessions = client.v3_session_get_active(
    collectors="hls,rtmp"
)
print(core_active_sessions)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

sessions, err := client.SessionsActive([]string{"hls", "rtmp"})
fmt.Printf("%+v\n", sessions)
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_sessions = client.v3_session_get(
    collectors="hls,rtmp"
)
print(core_sessions)
curl http://127.0.0.1:8080/api/v3/session?collectors=hls,rtmp \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

sessions, err := client.Sessions([]string{"hls", "rtmp"})
fmt.Printf("%+v\n", sessions)

Configuration

Location

You have to provide the location of the config file by setting the environment variable CORE_CONFIGFILE to the path of the config file. Example:

export CORE_CONFIGFILE=./config/config.json

The config file is written in JSON format.

If the config file doesn't exist yet, it will be created and its fields will be filled with their default values.

If the config file is partially complete or of an older version, it will be upgraded and the missing fields will be filled with their default values.

If you don't provide the CORE_CONFIGFILE environment variable, the default config values will be used and the configuration will not be persisted to the disk.

As of version 16.12.0:

If no path is given in the environment variable CORE_CONFIGFILE, different standard locations will be probed:

  • os.UserConfigDir() + /datarhei-core/config.json

  • os.UserHomeDir() + /.config/datarhei-core/config.json

  • ./config/config.json

If the config.json doesn't exist in any of these locations, it will be assumed to be at ./config/config.json

A minimal valid config file must contain at least the config version:

{
    "version": 3
}

Configuration

Configuration values can be changed either by editing the config file directly, via the JSON API (API for short), or via environment variables (ENV for short). All environment variables have the prefix CORE_ followed by the JSON names in uppercase. Example:

{
   "version": 3,
   "id": "1",
   "name": "super-core-1337",
   "address": ":8080",
   "log": {
      "level": "warn"
   }
}
CORE_ID=1
CORE_NAME=super-core-1337
CORE_ADDRESS=:8080
CORE_LOG_LEVEL=warn

In the following, every field of the configuration file is described in detail:

id (string)

ID of the Core. If not set, a UUIDv4 will be generated. Default: unset

name (string)

Human-readable name of the Core. If not set a custom name will be generated. Default: unset

address (string)

HTTP listening address. Default: :8080

The default :8080 will listen on all interfaces on port 8080. To use a specific interface, additionally write its IP, e.g. 127.0.0.1:8080 to listen only on the loopback interface.

log

Log settings.

db

Database (processes, metadata, ...) endpoint.

host

Configuration to detect or set the host-/domainname.

api

API Security options.

tls

TLS/HTTPS settings (also required for RTMPS).

storage

General configuration, DiskFS, MemFS, and S3.

rtmp

RTMP server for publishing and playing streams.

srt

SRT server for publishing and playing streams.

ffmpeg

General FFmpeg settings.

session

HLS-/MPEG-DASH session management and bandwidth limitations.

metrics

General metrics settings.

route

HTTP/S route configuration (e.g., to inject UIs).

debug

Core / Golang debugging options.

update_check (bool)

All about datarhei Update-Checks and data tracking.

{
   "update_check": true,
   "service": {
      "url": "https://service.datarhei.com"
   }
}

CORE_UPDATE_CHECK=true

CORE_SERVICE_URL=https://service.datarhei.com

Check for updates and send anonymized data (default: false). Requires service.url.

IP addresses are anonymized and stored for 30 days on servers in the EU.

service.url (string)

URL for the update_check Service API. Default: https://service.datarhei.com

About anonymized data:

We receive: id, OS architecture, uptime, process stats (total: running, failed, killed), and viewer count.

The data is used exclusively for the further development of the products and error detection. Domains/IP addresses, companies, and persons remain anonymous.

Metadata

Store metadata in a process

The metadata for a process allows you to store arbitrary JSON data with that process, e.g. if you have an app that uses the datarhei Core API you can store app-specific information in the metadata.

In order not to conflict with other apps that might write to the metadata as well, you have to provide a key under which your metadata is stored. Think of a namespace or similar.

Create, Update

Add or update the metadata for a process and key.

Example: Write a metadata object into the metadata for the process test and key desc.

curl http://127.0.0.1:8080/api/v3/process/test/metadata/desc \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X POST \
   -d '{
         "title": "My title",
         "description": "My description."
      }'
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

client.v3_process_put_metadata(
    id="test",
    key="desc",
    data={
        "title": "My title",
        "description": "My description."
    }
)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

data := map[string]string{
    "title": "My title",
    "description": "My description.",
}

err := client.ProcessMetadataSet("test", "desc", data)
if err != nil {
    ...
}

Description:

Read

Read out the stored metadata. The key is optional. If the key is not provided, all stored metadata for that process will be in the result.

Example: read the metadata object for process test that is stored under the key desc.

curl http://127.0.0.1:8080/api/v3/process/test/metadata/desc \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

client.v3_process_get_metadata(
    id="test",
    key="desc"
)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

metadata, err := client.ProcessMetadata("test", "desc")
if err != nil {
    ...
}

fmt.Printf("%+v\n", metadata)

Description:

from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

core_log = client.v3_log_get(
    format="console"
)
print(core_log)
curl http://127.0.0.1:8080/api/v3/log \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

events, err := client.Log()

for _, e := range events {
    fmt.Printf("%+v\n", e)
}
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_rtmp = client.v3_rtmp_get()
print(core_rtmp)
curl http://127.0.0.1:8080/api/v3/rtmp \
   -H 'accept: application/json' \
   -X GET
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

channels, err := client.RTMPChannels()
fmt.Printf("%+v\n", channels)
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

client.v3_process_put_command(
    id="test",
    command="stop"
)
curl http://127.0.0.1:8080/api/v3/process/test \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X PUT \
   -d '{
         "command": "stop"
      }'
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

err := client.ProcessCommand("test", "stop")
if err != nil {
    ...
}

TLS / HTTPS

Enable TLS / HTTPS support

These settings are for configuring the TLS / HTTPS support for datarhei Core.

Configuration

address (string)

If TLS is enabled, the HTTPS server will listen on this address. The default address is :8181.

The default :8181 will listen on all interfaces on port 8181. To use a specific interface, prepend its IP address, e.g. 127.0.0.1:8181 to listen only on the loopback interface.

enable (bool)

Set this value to true in order to enable TLS / HTTPS support. If enabled, you have to either provide your own certificate (see cert_file and key_file) or enable automatic certificates from Let's Encrypt (see auto).

If TLS is enabled, an HTTP server listening on address will additionally be started. This server provides access to everything the HTTPS server provides; additionally it allows ACME http-01 challenges in case Let's Encrypt (auto) certificates are enabled.

By default this is set to false.

auto (bool)

Enable automatic certificate generation from Let's Encrypt. This only works if enable is set to true and at least one public hostname is defined in host.name. All listed hostnames will be included in the certificate. All listed public hostnames are required to point to the host datarhei Core is running on.

In order for Let's Encrypt to resolve the http-01 challenge, the HTTP server of the datarhei Core must be reachable on port 80, either by setting address to :80 or by forwarding/mapping port 80 to the actual port the HTTP server is listening on.

The obtained certificates will be stored in the /cert subdirectory of db.dir in order to be available after a restart.

Any provided paths in cert_file and key_file will be ignored.

By default this is set to false.

mail (string)

An email address that is required for Let's Encrypt in order to receive a certificate.

By default the email address [email protected] is used.

cert_file (string)

If you bring your own certificate, provide the path to the certificate file in PEM format.

By default this is not set.

key_file (string)

If you bring your own certificate, provide the path to the key file in PEM format.

By default this is not set.

Examples

Let's Encrypt

If you want to use automatic certificates from Let's Encrypt, set tls.enable and tls.auto to true. host.name has to be set to the domain name this host is reachable under. Otherwise the ACME http-01 challenge will not work.

Self-Signed certificates

To create a self-signed certificate and key file pair, run this command and provide a reasonable value for the Common Name (CN). The CN is the fully qualified name of the host the instance is running on (e.g., localhost). You can also use an IP address or a wildcard name, e.g., *.example.com.

RSA SSL certificate

ECDSA SSL certificate

Run openssl ecparam -list_curves to list all supported curves.

Skills

Skills denote the capabilities of the used FFmpeg binary. It includes version information, supported input and output protocols, available hardware accelerators, supported formats for muxing and demuxing, filters, and available input and output devices.

Read

Example:

Description:

Reload

Reloading the skills might be necessary if you plug in e.g. a USB device. It will only show up in the list of available devices after the devices have been probed again.

Example:

Custom Docker images

The Core-FFmpeg bundle uses Docker's multi-stage process so that FFmpeg and the Core can be updated and maintained independently.

When building the Core-FFmpeg bundle, an FFmpeg image is used. The previously created Golang libraries and folder structures are copied into this image.

This process speeds up the creation of the final Core-FFmpeg bundle, as existing or previously created images can be used, and compiling all the code is no longer required.

The following base images are available:

  • docker.io/datarhei/base:alpine-ffmpeg-latest

  • docker.io/datarhei/base:alpine-ffmpeg-rpi-latest

  • docker.io/datarhei/base:ubuntu-ffmpeg-cuda-latest

  • docker.io/datarhei/base:ubuntu-ffmpeg-vaapi-latest

  • docker.io/datarhei/base:alpine-core-latest

  • docker.io/datarhei/base:ubuntu-core-latest

Specific versions are available on the Docker website:

1. Create a custom FFmpeg image

1.1 Clone the FFmpeg build files

Repository

1.2 Switch to the cloned folder

1.3 Change a Dockerfile

Dockerfile without --disable-debug and --disable-doc.

1.4 Build a custom image

Arguments:

  • default Dockerfile: Dockerfile.alpine Image name: datarhei/base:alpine-ffmpeg-latest

  • rpi Dockerfile: Dockerfile.alpine.rpi Image name: datarhei/base:alpine-ffmpeg-rpi-latest

  • cuda Dockerfile: Dockerfile.ubuntu.cuda Image name: datarhei/base:ubuntu-ffmpeg-cuda-latest

  • vaapi Dockerfile: Dockerfile.ubuntu.vaapi Image name: datarhei/base:ubuntu-ffmpeg-vaapi-latest

2. Create a custom Core image

2.1 Clone the Core build files

Repository

2.2 Switch into the cloned folder

2.3 Build a custom image

3. Create a custom Core-FFmpeg bundle

You can find the Dockerfile for the bundle (Dockerfile.bundle) in the cloned Core repository.

3.1 Build a custom image

Docker supports multi-architecture images via --platform linux/amd64,linux/arm64,linux/arm/v7.

Support

We offer multiple forms of service and can provide support for any kind of streaming, whether professional broadcasting or personal use. We are excited to learn more about your project.

Helping Hands

To ensure your project's success, we offer installation services and ongoing support from the datarhei team. Helping Hands includes installing a datarhei Restreamer or datarhei Core on a server instance and managing the server as a managed service.

  1. Installation

  2. Configuration

  3. Updates

  4. Fix errors

Communication and assistance are available through email or chat on our Discord and are conducted in English and German. We appreciate your support of "Helping Hands."

Within 48 hours of becoming a patron or sponsor on Open Collective, a representative from FOSS GmbH will contact you for further information regarding the installation process.

Service level agreements are in effect for the duration of the active donation.

If you have questions about the Helping Hands Service Level Agreement (SLA), contact us on Discord or by email at [email protected]. We're here to help!

Book now

Business Inquiries

We're the team at FOSS GmbH (https://foss-gmbh.ch) from Switzerland, the creators of datarhei.com. We'll do our best to get back to you within 1-2 days during our regular business hours, excluding weekends and holidays in Switzerland and Germany. No matter what you need help with, we're here to provide professional support, software development, and consulting for anything related to datarhei software.

If you have a commercial request, such as a bug fix or feature enhancement, just let us know, and we'll be happy to provide a quote.

Commercial requests fund OSS and are always a priority.

Business contact

Please fill out this form to help us respond to your request as quickly as possible. We're able to assist you in both German and English. However, please remember that we cannot process any requests not submitted through this form.

Community

Private and non-commercial requests can be discussed and resolved on GitHub and Discord's public channels.

We will answer as soon as possible, but without a guarantee (except for patrons and Open Collective sponsors).

Open issue on GitHub

Say hello on Discord

Donation

Every little bit helps! Consider becoming a patron on Patreon or backer on Open Collective to support keeping the software free.

Rate on Google

💙💚🧡💜 Help datarhei with a rating or review on Google. #feelsgoodtobeloved

>> https://t.co/ZT9pRX1gG7

💛 Thanks for using datarhei software. Your datarhei.com team

Coding

Requirements

  • Go v1.18+ (Download here)

Build

Clone the repository and build the binary:

After the build process, the binary is available as core.

For more build options, run make help.

Repository

Cross Compile

If you want to run the binary on a different operating system and/or architecture, you can create the appropriate binary by setting the corresponding environment variables, e.g.

Docker

Build the Docker image and run it to try out the API with FFmpeg:

How to customize FFmpeg: see Custom Docker images.

Code style

The source code is formatted with go fmt, or run make fmt. Static analysis of the source code is done with staticcheck, or run make lint.

Before committing changes, you should run make commit to ensure that the source code is in shape.

Widget (Website)

This class of endpoints provides access to information that can be used for widgets on a website. These endpoints are not protected by the API access control.

Process

Fetch minimal statistics about a process. You need to know the process ID.

This is not implemented in the GoClient.
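
Since the endpoint is not protected by the API access control, a plain HTTP GET is sufficient. The following is a minimal Go sketch using only the standard library (it assumes the Core is reachable at http://127.0.0.1:8080 and that a process with the ID test exists; it is not part of core-client-go):

import (
    "fmt"
    "io"
    "net/http"
)

// Fetch the widget statistics for the process "test" directly via HTTP.
// No token is needed because the widget endpoints are unprotected.
resp, err := http.Get("http://127.0.0.1:8080/api/v3/widget/process/test")
if err != nil {
    ...
}
defer resp.Body.Close()

body, _ := io.ReadAll(resp.Body)
fmt.Println(string(body))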

Logging
Database
Hostname
API Security
TLS / HTTPS
Storage
RTMP
SRT
FFmpeg
Sessions
Metrics
Router
Debug
{
   "address": ":80",
   "host": {
      "name": ["domain.com"],
      "auto": false
   },
   "tls": {
      "address": ":8181",
      "enable": true,
      "auto": true,
      "mail": "[email protected]"
   }
}
CORE_ADDRESS=:80
CORE_HOST_NAME=domain.com
CORE_HOST_AUTO=false
CORE_TLS_ADDRESS=:8181
CORE_TLS_ENABLE=true
CORE_TLS_AUTO=true
[email protected]
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out cert.pem -subj '/CN=localhost'
openssl ecparam -name secp521r1 -genkey -out key.pem
openssl req -new -x509 -key key.pem -out cert.pem -days 365 -subj '/CN=localhost'
{
   "host": {
      "name": ["domain.com"],
      "auto": false
   },
   "tls": {
      "address": ":8181",
      "enable": true,
      "auto": false,
      "cert_file": "/core/config/example.cert",
      "key_file": "/core/config/example.key"
   }
}
CORE_HOST_NAME=domain.com
CORE_HOST_AUTO=false
CORE_TLS_ADDRESS=:8181
CORE_TLS_ENABLE=true
CORE_TLS_AUTO=false
CORE_TLS_CERT_FILE=/core/config/example.cert
CORE_TLS_KEY_FILE=/core/config/example.key
{
   "tls": {
      "address": ":8181",
      "enable": false,
      "auto": false,
      "mail": "[email protected]",
      "cert_file": "",
      "key_file": "",
   }
}
CORE_TLS_ADDRESS=":8181"
CORE_TLS_ENABLE=false
CORE_TLS_AUTO=false
[email protected]
CORE_TLS_CERT_FILE=
CORE_TLS_KEY_FILE=
curl http://127.0.0.1:8080/api/v3/skills/reload \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

client.v3_skills_reload()
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

err := client.SkillsReload()
if err != nil {
    ...
}

skills, err := client.Skills()
fmt.Printf("%+v\n", skills)
curl http://127.0.0.1:8080/api/v3/skills \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_ffmpeg_skills = client.v3_skills_get()
print(core_ffmpeg_skills)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

skills, err := client.Skills()
fmt.Printf("%+v\n", skills)
git clone https://github.com/datarhei/ffmpeg.git
cd ffmpeg
Edit: Dockerfile.alpine
...
RUN cd /dist/ffmpeg-${FFMPEG_VERSION} && \
  patch -p1 < /contrib/ffmpeg-jsonstats.patch && \
  patch -p1 < /contrib/ffmpeg-hlsbitrate.patch && \
  ./configure \
  --extra-version=datarhei \
  --prefix="${SRC}" \
  --extra-libs="-lpthread -lxml2 -lm -lz -lsupc++ -lstdc++ -lssl -lcrypto -lz -lc -ldl" \
  --enable-nonfree \
  --enable-gpl \
  --enable-version3 \
  --enable-postproc \
  --enable-static \
  --enable-openssl \
  --enable-libxml2 \
  --enable-libv4l2 \
  --enable-v4l2_m2m \
  --enable-libfreetype \
  --enable-libsrt \
  --enable-libx264 \
  --enable-libx265 \
  --enable-libvpx \
  --enable-libmp3lame \
  --enable-libopus \
  --enable-libvorbis \
  --disable-ffplay \
  --disable-shared && \
  make -j$(nproc) && \
  make install
...
./Build.sh default
git clone git@github.com:datarhei/core.git
cd core
docker build -t datarhei/base:alpine-core-latest .
docker build \
    -f Dockerfile.bundle \
    --build-arg CORE_IMAGE=datarhei/base:alpine-core-latest \
    --build-arg FFMPEG_IMAGE=datarhei/base:alpine-ffmpeg-latest \
    -t core-bundle:dev .
$ git clone git@github.com:datarhei/core.git
$ cd core
$ make
$ env GOOS=linux GOARCH=arm go build -o core-linux-arm
$ env GOOS=linux GOARCH=arm64 go build -o core-linux-arm64
$ env GOOS=freebsd GOARCH=amd64 go build -o core-freebsd-amd64
$ env GOOS=windows GOARCH=amd64 go build -o core-windows-amd64
$ env GOOS=darwin GOARCH=amd64 go build -o core-macos-amd64
...
$ docker build -t core .
$ docker build -f Dockerfile.bundle \
    --build-arg CORE_IMAGE=core \
    --build-arg FFMPEG_IMAGE=datarhei/base:alpine-ffmpeg-latest \
    -t core-bundle .
$ docker run -it --rm -v ${PWD}/data:/core/data -p 8080:8080 core-bundle
curl http://127.0.0.1:8080/api/v3/widget/process/someid \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_widget_process = client.v3_widget_get_process(
    id="test"
)
print(core_widget_process)
core/CHANGELOG.md at main · datarhei/coreGitHub
GitHub - datarhei/restreamer-ui: The user interface of the Restreamer for the connection to the Core application.GitHub

Login

With auth enabled, you have to retrieve a JWT token before you can access the API calls.

Username/password login

Send the username and password, as defined in api.auth.username and api.auth.password, in the body of the request to the /api/login endpoint in order to obtain a valid access and refresh JWT.

Example:

On successful login, the response looks like this:

Use the access_token in all subsequent calls to the /api/v3/ endpoints, e.g.

The expiry date is stored in the exp field of the access token's payload; the number of seconds until it expires is stored in the exi field.
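
The client libraries handle token expiry automatically. If you talk to the API directly, you can inspect these claims yourself; the payload is the base64url-encoded middle part of the JWT and can be decoded without verifying the signature. A minimal Go sketch using only the standard library (the token value is just the placeholder from the examples on this page):

import (
    "encoding/base64"
    "encoding/json"
    "fmt"
    "strings"
)

// Decode the (unverified) payload of the access token and print the expiry claims.
accessToken := "eyJz93a...k4laUWw" // access_token from the login response

parts := strings.Split(accessToken, ".")
payload, err := base64.RawURLEncoding.DecodeString(parts[1])
if err != nil {
    ...
}

claims := map[string]interface{}{}
json.Unmarshal(payload, &claims)

fmt.Println("exp:", claims["exp"]) // expiry date as unix timestamp
fmt.Println("exi:", claims["exi"]) // seconds until expiry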

In order to obtain a new access token, use the refresh_token for a call to /api/login/refresh:

After the refresh token expires, you have to login again with your username and password.

When creating a new core client, the login happens automatically. If the login fails, coreclient.New() will return an error.

Description:

Auth0 login

Send a valid Auth0 access JWT in the Authorization header to the /api/login endpoint in order to obtain an access and refresh JWT. The Auth0 tenant and the allowed users must be defined in the configuration.

Example:

JWT refresh

In order to obtain a new access token, use the refresh_token for a call to /api/login/refresh. Example:

On successful login, the response looks like this:

The client handles the refresh of the tokens automatically. However, the access_token can also be updated manually:

The client handles the refresh of the tokens automatically. However, you can extract the currently used tokens from the client:

You can use these tokens to continue this session later on, given that at least the refresh token didn't expire yet. This saves the client a login round-trip:

The username and password should be provided as well, in case the refresh token expires.

Once the refresh token expires, you have to login again with your username and password, or a valid Auth0 token.

Description:

Disk

The disk filesystem gives access to the directory that has been provided in the configuration as storage.disk.dir. Via HTTP under / it is accessible for retrieval only.

Given that the requested file exists, the returned Content-Type is based solely on the file extension. For a list of known mime-types and their extensions see storage.mime_types in the configuration.

Access

API

The contents of the disk filesystem at / are also accessible via the API in the same way as described above, but with the same protection as the API (see API-Security configuration) for all operations. It is also possible to list all files that are currently in the filesystem.

Create, Update

Example:

Path is the complete file path incl. file name (/a/b/c/1.txt).

The contents for the upload have to be provided as an io.Reader.

After the successful upload the file is available at /example.txt and /api/v3/fs/disk/example.txt.

Description:

Read

List all files

Listing all currently stored files is done by calling /api/v3/fs/disk. It also accepts the query parameters pattern, sort (name, size, or lastmod) and order (asc or desc). If none of the parameters are given, all files will be listed sorted by their last modification time in ascending order.

With the pattern parameter you can filter the list based on a glob pattern, with the addition of the ** placeholder to include multiple subdirectories, e.g. listing all .ts files in the root directory has the pattern /*.ts, listing all .ts files in the whole filesystem has the pattern /**.ts.

Example:

Description:

Download a file

For downloading a file you have to specify the complete path and filename. The Content-Type will always be application/data.

Example:

The returned data is an io.ReadCloser.

Description:

Delete

For deleting a file you have to specify the complete path and filename.

Example:

Description:

Configuration

Probe

Probing a process means detecting the vitals of its inputs, e.g. frames, bitrate, and codec, for each input and stream. For example, a video file on disk may contain two video streams (low and high resolution) as well as audio and subtitle streams in different languages.

A process must already exist before it can be probed. During probing only the global FFmpeg options and the inputs are used to construct the FFmpeg command line.

The probe returns an object with an array of the detected streams and an array of lines from the output of the ffmpeg command.

In the following example we assume the process config with these inputs for the process test. Parts that are not relevant for probing have been left out for brevity:

The expected result would be:

The field index refers to the input and the field stream refers to the stream of an input.

Example: probe the inputs of a process with the ID test:

Description:

curl http://127.0.0.1:8080/api/login/refresh \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -H 'Authorization: Bearer eyJz93a...k4laUWx' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
)
client.login()

print(client.token())
import "github.com/datarhei/core-client-go/v16"

client, err := coreclient.New(coreclient.Config{
    Address: "http://127.0.0.1:8080",
    Username: "YOUR_USERNAME",
    Password: "YOUR_PASSWORD",
})
if err != nil {
    ...
}
curl http://127.0.0.1:8080/api/login \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -H 'Authorization: Bearer eyJz93a...k4laUWw' \
   -X POST
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080",
    auth0_token="eyJz93a...k4laUWw",
)
client.login()
import "github.com/datarhei/core-client-go/v16"

client, err := coreclient.New(coreclient.Config{
    Address: "http://127.0.0.1:8080",
    Auth0Token: "eyJz93a...k4laUWw",
})
if err != nil {
    ...
}
curl http://127.0.0.1:8080/api/login/refresh \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleGkiOjg2NDAwLCJleHAiOjE2NzA1Mjc2MjUsImlhdCI6MTY3MDQ0MTIyNSwiaXNzIjoiZGF0YXJoZWktY29yZSIsImp0aSI6IjczM2Q4Y2UxLTY3YjEtNDM3Yy04YzQ1LTM3Yjg4MmZjMWExZiIsInN1YiI6ImFkbWluIiwidXNlZm9yIjoicmVmcmVzaCJ9.3lqZFJeN7ILfM4DTi0-ZJ7kAzqTMR-yRgRl3o89O-jY' \
   -X GET
{
   "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleGkiOjYwMCwiZXhwIjoxNjcwNDQxODI1LCJpYXQiOjE2NzA0NDEyMjUsImlzcyI6ImRhdGFyaGVpLWNvcmUiLCJqdGkiOiJhZWU4OTZhNS05ZThhLTRlMGQtYjk4Zi01NTA3NTUwNzA2YzUiLCJzdWIiOiJhZG1pbiIsInVzZWZvciI6ImFjY2VzcyJ9.xrnIfNZU9Z0nrUxYddpPQOMO7ypHA2vuqrYuAr95elg"
}
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080",
    refresh_token="eyJz93a...k4laUWw",
)
client.token_refresh()
accessToken, refreshToken := client.Tokens()
client, err := coreclient.New(coreclient.Config{
    Address: "http://127.0.0.1:8080",
    Username: "YOUR_USERNAME",
    Password: "YOUR_PASSWORD",
    AccessToken: accessToken,
    RefreshToken: refreshToken,
})
curl http://127.0.0.1:8080/api/login \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X POST \
   -d '{
         "username": "YOUR_USERNAME",
         "password": "YOUR_PASSWORD"
      }'
{
   "access_token": "eyJz93a...k4laUWw",
   "refresh_token": "eyJz93a...k4laUWx"
}
curl http://127.0.0.1:8080/api/ \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -H 'Authorization: Bearer eyJz93a...k4laUWw' \
   -X GET
echo 'test' > example.txt && \
curl http://127.0.0.1:8080/api/v3/fs/disk/example.txt \
   -d @example.txt \
   -X PUT
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

client.v3_fs_put_file(
    name: "disk",
    path: "example.txt",
    data: b"test"
)
import (
    "strings"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

data := strings.NewReader("test")
err := client.FilesystemAddFile("disk", "/example.txt", data)
curl "http://127.0.0.1:8080/api/v3/fs/disk?sort=name&order=asc" \
   -X GET
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

core_memfs_list = client.v3_fs_get_file_list(name="disk")
print(core_memfs_list)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

files, err := client.FilesystemList("disk", "/*.", coreclient.SORT_NAME, coreclient.ORDER_ASC)

for _, file := range files {
    fmt.Printf("%+v\n", file)
}
curl http://127.0.0.1:8080/api/v3/fs/disk/example.txt \
   -X GET
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

core_memfs_file = client.v3_fs_get_file(
    name="disk",
    path="example.txt"
)
print(core_memfs_file)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

data, err := client.FilesystemGetFile("disk", "/example.txt")
defer data.Close()
curl http://127.0.0.1:8080/api/v3/fs/disk/example.txt \
   -X DELETE
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

client.v3_fs_delete_file(
    name="disk",
    path="example.txt"
)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

err := client.FilesystemDeleteFile("disk", "/example.txt")
curl http://127.0.0.1:8080/path/to/a/file.txt -o file.txt
Disk
{
    "options": ["-err_detect", "ignore_err", "-y"],
    "input": [
      {
        "address": "testsrc2=rate=25:size=640x360",
        "id": "input_0",
        "options": ["-f", "lavfi", "-re"]
      },
      {
        "address": "anullsrc=r=44100:cl=stereo",
        "id": "input_1",
        "options": ["-f", "lavfi"]
      }
    ]
}
{
  "log": [
    "ffmpeg version 5.1.2 Copyright (c) 2000-2022 the FFmpeg developers",
    "  built with Apple clang version 14.0.0 (clang-1400.0.29.102)",
    "  configuration: --prefix=/usr/local/Cellar/ffmpeg/5.1.2 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox",
    "  libavutil      57. 28.100 / 57. 28.100",
    "  libavcodec     59. 37.100 / 59. 37.100",
    "  libavformat    59. 27.100 / 59. 27.100",
    "  libavdevice    59.  7.100 / 59.  7.100",
    "  libavfilter     8. 44.100 /  8. 44.100",
    "  libswscale      6.  7.100 /  6.  7.100",
    "  libswresample   4.  7.100 /  4.  7.100",
    "  libpostproc    56.  6.100 / 56.  6.100",
    "Input #0, lavfi, from 'testsrc2=rate=25:size=640x360':",
    "  Duration: N/A, start: 0.000000, bitrate: N/A",
    "  Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn",
    "Input #1, lavfi, from 'anullsrc=r=44100:cl=stereo':",
    "  Duration: N/A, start: 0.000000, bitrate: 705 kb/s",
    "  Stream #1:0: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s",
    "At least one output file must be specified"
  ],
  "streams": [
    {
      "bitrate_kbps": 0,
      "channels": 0,
      "codec": "rawvideo",
      "coder": "",
      "duration_sec": 0,
      "format": "lavfi",
      "fps": 0,
      "height": 360,
      "index": 0,
      "language": "und",
      "layout": "",
      "pix_fmt": "yuv420p",
      "sampling_hz": 0,
      "stream": 0,
      "type": "video",
      "url": "testsrc2=rate=25:size=640x360",
      "width": 640
    },
    {
      "bitrate_kbps": 705,
      "channels": 0,
      "codec": "pcm_u8",
      "coder": "",
      "duration_sec": 0,
      "format": "lavfi",
      "fps": 0,
      "height": 0,
      "index": 1,
      "language": "und",
      "layout": "stereo",
      "pix_fmt": "",
      "sampling_hz": 44100,
      "stream": 0,
      "type": "audio",
      "url": "anullsrc=r=44100:cl=stereo",
      "width": 0
    }
  ]
}
curl http://127.0.0.1:8080/api/v3/process/test/probe \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_process_test_probe = client.v3_process_get_probe(
    id="test"
)
print(core_process_test_probe)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

probe, err := client.ProcessProbe("test")
if err != nil {
    ...
}

fmt.Printf("%+v\n", probe)
Releases · datarhei/coreGitHub
Swagger UI

S3

S3 filesystems are only accessible via HTTP under their configured mountpoint. Use the POST or PUT method with the path to that file to (over-)write a file. The body of the request contains the contents of the file. No particular encoding or Content-Type is required. The file can then be downloaded from the same path.

This filesystem is practical for rarely changing data (e.g. VOD) and long-term storage.

On this page and in the examples we assume that an S3 storage with the name aws is mounted on /awsfs.

curl http://127.0.0.1:8080/awsfs/path/to/a/file.txt -X PUT -d @somefile.txt
curl http://127.0.0.1:8080/awsfs/path/to/a/file.txt -o file.txt
curl http://127.0.0.1:8080/awsfs/path/to/a/file.txt -X DELETE

The returned Content-Type is based solely on the file extension. For a list of known mime-types and their extensions see storage.mime_types in the configuration.

Access protection

It is strongly recommended to enable a username/password (HTTP Basic-Auth) protection for any PUT/POST and DELETE operations on the S3 mountpoint (here /awsfs). GET operations are not protected.

By default HTTP Basic-Auth is not enabled.

API

The contents of the S3 filesystem mounted on /awsfs are also accessible via the API in the same way as described above, but with the same protection as the API (see API-Security configuration) for all operations. It is also possible to list all files that are currently in the filesystem.

Create, Update

Example:

echo 'test' > example.txt && \
curl http://127.0.0.1:8080/api/v3/fs/aws/example.txt \
   -d @example.txt \
   -X PUT
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

client.v3_fs_put_file(
    name: "aws",
    path: "example.txt",
    data: b"test"
)
import (
    "strings"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

data := strings.NewReader("test")
err := client.FilesystemAddFile("aws", "/example.txt", data)

The contents for the upload have to be provided as an io.Reader.

After the successful upload the file is available at /awsfs/example.txt and /api/v3/fs/aws/example.txt.

Description:

Read

List all files

Listing all currently stored files is done by calling /api/v3/fs/aws. It also accepts the query parameters pattern, sort (name, size, or lastmod) and order (asc or desc). If none of the parameters are given, all files will be listed sorted by their last modification time in ascending order.

With the pattern parameter you can filter the list based on a glob pattern, with the addition of the ** placeholder to include multiple subdirectories, e.g. listing all .ts files in the root directory has the pattern /*.ts, listing all .ts files in the whole filesystem has the pattern /**.ts.

Example:

curl "http://127.0.0.1:8080/api/v3/fs/aws?sort=name&order=asc" \
   -X GET
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

core_memfs_list = client.v3_fs_get_file_list(name="aws")
print(core_memfs_list)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

files, err := client.FilesystemList("aws", "/*.", coreclient.SORT_NAME, coreclient.ORDER_ASC)

for _, file := range files {
    fmt.Printf("%+v\n", file)
}

Description:

Download a file

For downloading a file you have to specify the complete path and filename. The Content-Type will always be application/data.

Example:

curl http://127.0.0.1:8080/api/v3/fs/aws/example.txt \
   -X GET
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

core_memfs_file = client.v3_fs_get_file(
    name="aws",
    path="example.txt"
)
print(core_memfs_file)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

data, err := client.FilesystemGetFile("aws", "/example.txt")
defer data.Close()

The returned data is an io.ReadCloser.

Description:

Delete

For deleting a file you have to specify the complete path and filename.

Example:

curl http://127.0.0.1:8080/api/v3/fs/aws/example.txt \
   -X DELETE
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

client.v3_fs_delete_file(
    name="aws",
    path="example.txt"
)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

err := client.FilesystemDeleteFile("aws", "/example.txt")

Description:

Configuration

SRT

The datarhei Core includes a simple SRT server for publishing and playing streams. Check out the SRT configuration and the SRT guide. This API endpoint will list the details of all currently publishing and playing streams.

This endpoint is still experimental and may change in a later minor version increase.

curl http://127.0.0.1:8080/api/v3/srt \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_srt = client.v3_srt_get()
print(core_srt)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

channels, err := client.SRTChannels()
fmt.Printf("%+v\n", channels)
GitHub - datarhei/core-client-python: An API client library for datarhei Core in Python3.GitHub
Install Docker EngineDocker Documentation
S3
S3
GitHub - datarhei/core-client-go: An API client library for datarhei Core in GoGitHub

Report

The process report captures the log output from FFmpeg. The output is split into a prelude and the log. The prelude is everything before the first progress line is printed; everything after that is part of the log. The progress lines themselves are not part of the report. Each log line comes with a timestamp of when it was captured from the FFmpeg output.

The process report helps you to analyze problems with the FFmpeg command line built from the inputs, outputs, and their options in the process configuration.

If the process is running, the FFmpeg logs will be written to the current report. As soon as the process finishes, the report will be moved to the history. The report history is also part of the response of this API endpoint.

You can define the number of log lines and how many reports should be kept in the history in the config for the datarhei Core in the ffmpeg.log section.

The following is an example of a report.

{
  "created_at": 1671109892,
  "history": [],
  "log": [
    [
      "1671111449",
      "[hls @ 0x7fa31f810240] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.m3u8' for writing"
    ],
    [
      "1671111449",
      "[hls @ 0x7fa31f810240] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.mp4' for writing"
    ],
    [
      "1671111451",
      "[hls @ 0x7fa31f810240] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0_0778.mp4' for writing"
    ],
    [
      "1671111451",
      "[mp4 @ 0x7fa31f80ee40] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0_0768.mp4' for writing"
    ],
    [
      "1671111451",
      "[hls @ 0x7fa31f810240] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.mp4' for writing"
    ],
    [
      "1671111453",
      "[hls @ 0x7fa31f810240] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0_0779.mp4' for writing"
    ],
    [
      "1671111453",
      "[mp4 @ 0x7fa31f80ee40] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0_0769.mp4' for writing"
    ],
    [
      "1671111453",
      "[hls @ 0x7fa31f810240] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.m3u8' for writing"
    ],
    ...
  ],
  "prelude": [
    "ffmpeg version 5.1.2 Copyright (c) 2000-2022 the FFmpeg developers",
    "  built with Apple clang version 14.0.0 (clang-1400.0.29.102)",
    "  configuration: --prefix=/usr/local/Cellar/ffmpeg/5.1.2 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox",
    "  libavutil      57. 28.100 / 57. 28.100",
    "  libavcodec     59. 37.100 / 59. 37.100",
    "  libavformat    59. 27.100 / 59. 27.100",
    "  libavdevice    59.  7.100 / 59.  7.100",
    "  libavfilter     8. 44.100 /  8. 44.100",
    "  libswscale      6.  7.100 /  6.  7.100",
    "  libswresample   4.  7.100 /  4.  7.100",
    "  libpostproc    56.  6.100 / 56.  6.100",
    "Input #0, lavfi, from 'testsrc2=rate=25:size=640x360':",
    "  Duration: N/A, start: 0.000000, bitrate: N/A",
    "  Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn",
    "Input #1, lavfi, from 'anullsrc=r=44100:cl=stereo':",
    "  Duration: N/A, start: 0.000000, bitrate: 705 kb/s",
    "  Stream #1:0: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s",
    "Stream mapping:",
    "  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))",
    "  Stream #1:0 -> #0:1 (pcm_u8 (native) -> aac (native))",
    "Press [q] to stop, [?] for help",
    "[libx264 @ 0x7fa31ef08540] using SAR=1/1",
    "[libx264 @ 0x7fa31ef08540] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2",
    "[libx264 @ 0x7fa31ef08540] profile Constrained Baseline, level 3.0, 4:2:0, 8-bit",
    "[libx264 @ 0x7fa31ef08540] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=4 lookahead_threads=4 sliced_threads=1 slices=4 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=50 keyint_min=26 scenecut=0 intra_refresh=0 rc_lookahead=0 rc=cbr mbtree=0 bitrate=1024 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=1024 vbv_bufsize=1024 nal_hrd=none filler=0 ip_ratio=1.40 aq=0",
    "[hls @ 0x7fa31f810240] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.mp4' for writing",
    "[http @ 0x7fa31f80fac0] HTTP error 404 Not Found",
    "Output #0, tee, to '[f=hls:start_number=0:hls_time=2:hls_list_size=6:hls_flags=append_list+delete_segments+program_date_time+independent_segments+temp_file:hls_delete_threshold=4:hls_segment_type=fmp4:hls_fmp4_init_filename=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.mp4:hls_fmp4_init_resend=1:hls_segment_filename=http\\\\://admin\\\\:xxx@localhost\\\\:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0_%04d.mp4:master_pl_name=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.m3u8:master_pl_publish_rate=2:method=PUT]http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0.m3u8|[f=flv]rtmp://localhost:1935/live/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.stream?token=foobar|[f=mpegts]srt://localhost:6000?mode=caller\u0026transtype=live\u0026streamid=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf,mode:publish,token:abc':",
    "  Metadata:",
    "    title           : http://example.com:8080/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf/oembed.json",
    "    service_provider: datarhei-Restreamer",
    "    encoder         : Lavf59.27.100",
    "  Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), yuv420p(progressive), 640x360 [SAR 1:1 DAR 16:9], q=2-31, 1024 kb/s, 25 fps, 25 tbn",
    "    Metadata:",
    "      encoder         : Lavc59.37.100 libx264",
    "    Side data:",
    "      cpb: bitrate max/min/avg: 1024000/0/1024000 buffer size: 1024000 vbv_delay: N/A",
    "  Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 44100 Hz, stereo, fltp, 64 kb/s",
    "    Metadata:",
    "      encoder         : Lavc59.37.100 aac"
  ]
}

Example: read the report of a process with the ID test:

curl http://127.0.0.1:8080/api/v3/process/test/report \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_process_test_report = client.v3_process_get_report(
    id="test"
)
print(core_process_test_report)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

report, err := client.ProcessReport("test")
if err != nil {
    ...
}

fmt.Printf("%+v\n", report)

Description:


In-memory

A very simple in-memory filesystem is available which is only accessible via HTTP under /memfs. Use the POST or PUT method with the path to that file. The body of the request contains the contents of the file. No particular encoding or Content-Type is required. The file can then be downloaded from the same path.

This filesystem is practical for often changing data (e.g. HLS live streams) in order not to stress the disk or wear out a flash drive. Also, you don't need to set up a RAM drive or similar on your system.

The returned Content-Type is based solely on the file extension. For a list of known mime-types and their extensions see storage.mime_types in the configuration.

Access protection

It is strongly recommended to enable a username/password (HTTP Basic-Auth) protection for any PUT/POST and DELETE operations on /memfs. GET operations are not protected.

By default HTTP Basic-Auth is enabled with the username "admin" and a random password.

Example:

Use these endpoints to, e.g., store HLS chunks and .m3u8 files (in contrast to an actual disk or a ramdisk):

Then you can play it generally with, e.g., ffplay http://127.0.0.1:8080/memfs/foobar.m3u8.

API

The contents of the /memfs are also accessible via the API in the same way as described above, but with the same protection as the API (see API-Security configuration) for all operations. It is also possible to list all files that are currently in the filesystem.

Create, Update

Example:

The contents for the upload have to be provided as an io.Reader.

After the successful upload the file is available at /memfs/example.txt and /api/v3/fs/mem/example.txt.

Description:

Read

List all files

Listing all currently stored files is done by calling /api/v3/fs/mem. It also accepts the query parameters pattern, sort (name, size, or lastmod) and order (asc or desc). If none of the parameters are given, all files will be listed sorted by their last modification time in ascending order.

With the pattern parameter you can filter the list based on a glob pattern, with the addition of the ** placeholder to include multiple subdirectories, e.g. listing all .ts files in the root directory has the pattern /*.ts, listing all .ts files in the whole filesystem has the pattern /**.ts.

Example:

Description:

Download a file

For downloading a file you have to specify the complete path and filename. The Content-Type will always be application/data.

Example:

The returned data is an io.ReadCloser.

Description:

Link

Linking a file will return a redirect to the linked file. The target of the redirect has to be in the body of the request.

Example:

This is not implemented in the client.

Description:

Delete

For deleting a file you have to specify the complete path and filename.

Example:

Description:

Configuration

State

The process state reflects the current vitals of a process. This includes, for example, how long the process has been in its current state, the order (whether it should be running or stopped, see Command), the current CPU and memory consumption, the actual FFmpeg command line, and more.

If the process is running, you will receive progress data in addition to the above-mentioned metrics. Progress data includes, for each input and output stream, the bitrate, framerate, bytes read/written, the speed of the processing, duplicated and dropped frames, and so on.

The following is an example output for a running process:

Example: read the state of a process with the ID test:

Description:

https://github.com/datarhei/coregithub.com
ffmpeg -f lavfi -re -i testsrc2=size=640x480:rate=25 -c:v libx264 -preset:v ultrafast -r 25 -g 50 -f hls -start_number 0 -hls_time 2 -hls_list_size 6 -hls_flags delete_segments+temp_file+append_list -method PUT -hls_segment_filename http://127.0.0.1:8080/memfs/foobar_%04d.ts -y http://127.0.0.1:8080/memfs/foobar.m3u8
echo 'test' > example.txt && \
curl http://127.0.0.1:8080/api/v3/fs/mem/example.txt \
   -d @example.txt \
   -X PUT
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

client.v3_fs_put_file(
    name: "mem",
    path: "example.txt",
    data: b"test"
)
import (
    "strings"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

data := strings.NewReader("test")
err := client.FilesystemAddFile("mem", "/example.txt", data)
curl "http://127.0.0.1:8080/api/v3/fs/mem?sort=name&order=asc" \
   -X GET
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

core_memfs_list = client.v3_fs_get_file_list(name="mem")
print(core_memfs_list)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

files, err := client.FilesystemList("mem", "/*.", coreclient.SORT_NAME, coreclient.ORDER_ASC)

for _, file := range files {
    fmt.Printf("%+v\n", file)
}
curl http://127.0.0.1:8080/api/v3/fs/mem/example.txt \
   -X GET
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

core_memfs_file = client.v3_fs_get_file(
    name="mem",
    path="example.txt"
)
print(core_memfs_file)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

data, err := client.FilesystemGetFile("mem", "/example.txt")
defer data.Close()
curl http://127.0.0.1:8080/api/v3/fs/mem/example2.txt \
   -d 'example.txt' \
   -X PATCH
curl http://127.0.0.1:8080/api/v3/fs/mem/example.txt \
   -X DELETE
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

client.v3_fs_delete_file(
    name="mem",
    path="example.txt"
)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

err := client.FilesystemDeleteFile("mem", "/example.txt")
curl http://127.0.0.1:8080/memfs/path/to/a/file.txt -X PUT -d @somefile.txt
curl http://127.0.0.1:8080/memfs/path/to/a/file.txt -o file.txt
curl http://127.0.0.1:8080/memfs/path/to/a/file.txt -X DELETE
In-memory
In-memory
{
  "command": [
    "-err_detect",
    "ignore_err",
    "-y",
    "-f",
    "lavfi",
    "-re",
    "-i",
    "testsrc2=rate=25:size=640x360",
    "-f",
    "lavfi",
    "-i",
    "anullsrc=r=44100:cl=stereo",
    "-dn",
    "-sn",
    "-map",
    "0:0",
    "-codec:v",
    "libx264",
    "-preset:v",
    "ultrafast",
    "-b:v",
    "1024k",
    "-maxrate:v",
    "1024k",
    "-bufsize:v",
    "1024k",
    "-r",
    "25",
    "-sc_threshold",
    "0",
    "-pix_fmt",
    "yuv420p",
    "-g",
    "50",
    "-keyint_min",
    "50",
    "-fps_mode",
    "cfr",
    "-tune:v",
    "zerolatency",
    "-map",
    "1:0",
    "-filter:a",
    "aresample=osr=44100:ochl=stereo",
    "-codec:a",
    "aac",
    "-b:a",
    "64k",
    "-shortest",
    "-flags",
    "+global_header",
    "-tag:v",
    "7",
    "-tag:a",
    "10",
    "-f",
    "tee",
    "[f=hls:start_number=0:hls_time=2:hls_list_size=6:hls_flags=append_list+delete_segments+program_date_time+independent_segments+temp_file:hls_delete_threshold=4:hls_segment_type=fmp4:hls_fmp4_init_filename=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.mp4:hls_fmp4_init_resend=1:hls_segment_filename=http\\\\://admin\\\\:xxx@localhost\\\\:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0_%04d.mp4:master_pl_name=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.m3u8:master_pl_publish_rate=2:method=PUT]http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0.m3u8|[f=flv]rtmp://localhost:1935/live/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.stream?token=foobar|[f=mpegts]srt://localhost:6000?mode=caller\u0026transtype=live\u0026streamid=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf,mode:publish,token:abc"
  ],
  "cpu_usage": 3.022,
  "exec": "running",
  "last_logline": "[hls @ 0x7fa31f810240] Opening 'http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.mp4' for writing",
  "memory_bytes": 21532672,
  "order": "start",
  "progress": {
    "bitrate_kbit": 0,
    "drop": 0,
    "dup": 0,
    "fps": 24.833,
    "frame": 62221,
    "inputs": [
      {
        "address": "testsrc2=rate=25:size=640x360",
        "avstream": null,
        "bitrate_kbit": 0,
        "codec": "rawvideo",
        "coder": "",
        "format": "lavfi",
        "fps": 0,
        "frame": 0,
        "height": 360,
        "id": "input_0",
        "index": 0,
        "packet": 0,
        "pix_fmt": "yuv420p",
        "pps": 0,
        "size_kb": 0,
        "stream": 0,
        "type": "video",
        "width": 640
      },
      {
        "address": "anullsrc=r=44100:cl=stereo",
        "avstream": null,
        "bitrate_kbit": 0,
        "codec": "pcm_u8",
        "coder": "",
        "format": "lavfi",
        "fps": 0,
        "frame": 0,
        "id": "input_1",
        "index": 1,
        "layout": "stereo",
        "packet": 0,
        "pps": 0,
        "sampling_hz": 44100,
        "size_kb": 0,
        "stream": 0,
        "type": "audio"
      }
    ],
    "outputs": [
      {
        "address": "[f=hls:start_number=0:hls_time=2:hls_list_size=6:hls_flags=append_list+delete_segments+program_date_time+independent_segments+temp_file:hls_delete_threshold=4:hls_segment_type=fmp4:hls_fmp4_init_filename=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.mp4:hls_fmp4_init_resend=1:hls_segment_filename=http\\\\://admin\\\\:xxx@localhost\\\\:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0_%04d.mp4:master_pl_name=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.m3u8:master_pl_publish_rate=2:method=PUT]http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0.m3u8|[f=flv]rtmp://localhost:1935/live/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.stream?token=foobar|[f=mpegts]srt://localhost:6000?mode=caller\u0026transtype=live\u0026streamid=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf,mode:publish,token:abc",
        "avstream": null,
        "bitrate_kbit": 0,
        "codec": "h264",
        "coder": "",
        "format": "tee",
        "fps": 0,
        "frame": 0,
        "height": 360,
        "id": "output_0",
        "index": 0,
        "packet": 0,
        "pix_fmt": "yuv420p",
        "pps": 0,
        "size_kb": 0,
        "stream": 0,
        "type": "video",
        "width": 640
      },
      {
        "address": "[f=hls:start_number=0:hls_time=2:hls_list_size=6:hls_flags=append_list+delete_segments+program_date_time+independent_segments+temp_file:hls_delete_threshold=4:hls_segment_type=fmp4:hls_fmp4_init_filename=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.mp4:hls_fmp4_init_resend=1:hls_segment_filename=http\\\\://admin\\\\:xxx@localhost\\\\:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0_%04d.mp4:master_pl_name=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.m3u8:master_pl_publish_rate=2:method=PUT]http://admin:xxx@localhost:8080/memfs/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf_output_0.m3u8|[f=flv]rtmp://localhost:1935/live/f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf.stream?token=foobar|[f=mpegts]srt://localhost:6000?mode=caller\u0026transtype=live\u0026streamid=f13d9ff4-8ac1-458d-8d61-6e7d8ad06faf,mode:publish,token:abc",
        "avstream": null,
        "bitrate_kbit": 0,
        "codec": "aac",
        "coder": "",
        "format": "tee",
        "fps": 0,
        "frame": 0,
        "id": "output_0",
        "index": 0,
        "layout": "stereo",
        "packet": 0,
        "pps": 0,
        "sampling_hz": 44100,
        "size_kb": 0,
        "stream": 1,
        "type": "audio"
      }
    ],
    "packet": 0,
    "q": 29,
    "size_kb": 0,
    "speed": 1,
    "time": 2488.84
  },
  "reconnect_seconds": -1,
  "runtime_seconds": 2489
}
curl http://127.0.0.1:8080/api/v3/process/test/state \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_process_test_state = client.v3_process_get_state(
    id="test"
)
print(core_process_test_state)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

state, err := client.ProcessState("test")
if err != nil {
    ...
}

fmt.Printf("%+v\n", state)
Issues · datarhei/coreGitHub
datarhei - Open Collectiveopencollect
Join the datarhei Discord Server!Discord
datarhei - Helping Hands - Open Collectiveopencollect
https://docs.google.com/forms/d/e/1FAIpQLSeYz5XnatjGI4OHcJFXo9q7B84rYZ0OZK_-QABcmTBLubKvvg/viewformdocs.google.com
datarhei | bietet streaming software | PatreonPatreon
GitHub - datarhei/ffmpeg: FFmpeg base image for datarhei/core.GitHub
https://github.com/datarhei/coregithub.com
Docker

FFmpeg

Settings for the FFmpeg binary.

Configuration

{
   "ffmpeg": {
      "binary": "ffmpeg",
      "max_processes": 0,
      "access": {
         "input": {
            "allow": [],
            "block": []
         },
         "output": {
            "allow": [],
            "block": []
         }
      },
      "log": {
         "max_lines": 50,
         "max_history": 3
      }
   }
}
CORE_FFMPEG_BINARY="ffmpeg"
CORE_FFMPEG_MAX_PROCESSES=0
CORE_FFMPEG_ACCESS_INPUT_ALLOW=a,b,c
CORE_FFMPEG_ACCESS_INPUT_BLOCK=a,b,c
CORE_FFMPEG_ACCESS_OUTPUT_ALLOW=a,b,c
CORE_FFMPEG_ACCESS_OUTPUT_BLOCK=a,b,c

binary (string)

Path to the ffmpeg binary. The system's %PATH will be searched for the ffmpeg binary. You can also provide an absolute or relative path to the binary.

By default this value is set to ffmpeg.

max_processes (integer)

The maximum number of simultaneously running ffmpeg instances. Set this value to 0 in order to not impose any limit.

By default this value is set to 0.

access.*

To control where FFmpeg can read from and where FFmpeg can write to, you can define patterns that match the input addresses or the output addresses. These patterns are regular expressions. For the respective environment variables the expressions need to be space-separated, e.g. https?:// rtsp:// rtmp://.

Independently of the values of access.output there's a check that verifies that output can only be written to the directory specified in storage.disk.dir and works as follows: if the address has a protocol specifier other than file:, then no further checks will be applied. If the protocol is file: or no protocol specifier is given, the address is assumed to be a path that is checked to be inside of storage.disk.dir.

The address will be rejected if it is outside the storage.disk.dir directory. Otherwise, the protocol file: will be prepended. If you want to explicitly allow or block access to the filesystem, use file: as a pattern in the respective list.

Special cases are the output addresses - (which will be rewritten to pipe:), and /dev/null, which will be allowed even though it's outside of storage.disk.dir.
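
For illustration only, the following config fragment sketches an access policy that allows only RTMP and SRT addresses as inputs and blocks all HTTP(S) outputs; the patterns are examples, not defaults:

{
   "ffmpeg": {
      "access": {
         "input": {
            "allow": ["rtmp://", "srt://"],
            "block": []
         },
         "output": {
            "allow": [],
            "block": ["https?://"]
         }
      }
   }
}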

access.input.allow (array)

List of patterns for allowed inputs.

By default this list is empty, i.e. all inputs are allowed.

access.input.block (array)

List of patterns for disallowed inputs.

By default this list is empty, i.e. no inputs are blocked.

access.output.allow (array)

List of patterns for allowed outputs.

By default this list is empty, i.e. all outputs are allowed.

access.output.block (array)

List of patterns for disallowed outputs.

By default this list is empty, i.e. no outputs are blocked.

log.max_lines (integer)

The number of latest FFmpeg log lines for each process to keep.

By default this value is set to 50 lines.

log.max_history (integer)

The number of historic logs for each process to keep.

By default this value is set to 3.


Config

The /api/v3/config endpoints allow you to inspect, modify, and reload the configuration of the datarhei Core.

Read

Retrieve the currently active Core configuration, together with a list of all fields that are overridden by an environment variable. Such fields can be changed, but changing them will have no effect because the environment variable always overrides the value.

Example:

curl http://127.0.0.1:8080/api/v3/config \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

core_config = client.v3_config_get()
print(core_config)
import (
    "github.com/datarhei/core-client-go/v16"
    "github.com/datarhei/core-client-go/v16/api"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

version, config, err := client.Config()

The actual config is in config.Config which is an interface{} type. Depending on the returned version, you have to cast it to the corresponding type in order to access the fields in the config:

if version == 1 {
    configv1 := config.Config.(api.ConfigV1)
} else if version == 2 {
    configv2 := config.Config.(api.ConfigV2)
} else if version == 3 {
    configv3 := config.Config.(api.ConfigV3)
}

Description:

Update

Upload a modified configuration. You can provide a partial configuration with only the fields you want to change. The minimal valid configuration you have to provide contains the version number:

{
    "version": 3
}

Example:

curl http://127.0.0.1:8080/api/v3/config \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X PUT \
   -d '{
         "version": 3,
         "name": "core-1",
         "log": {
            "level": "debug"
         }
      }'
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

client.v3_config_put(config={
   "version": 3,
   "name": "core-1",
   "log": {
      "level": "debug"
   }
})
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

type cfg struct{
    Version int64 `json:"version"`
    Name string `json:"name"`
    Log struct{
        Level string `json:"level"`
    } `json:"log"`
}

var config = &cfg{
    Version: 3,
    Name: "core-1",
}
config.Log.Level = "debug"

err := client.ConfigSet(config)

This has no effect until the configuration is reloaded.

Description:

Reload

After changing the configuration, the datarhei Core has to be restarted in order to reload the changed configuration.

Example:

curl http://127.0.0.1:8080/api/v3/config/reload \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

client.v3_config_reload()
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

err := client.ConfigReload()

Configuration reload will restart the Core! The in-memory file system and sessions will remain intact.

Description:

Configuration

Complete config example:

{
    "version": 3,
    "id": "03a1d1af-b947-4a13-a169-d3f2238714ee",
    "name": "super-core-1337",
    "address": ":8080",
    "update_check": true,
    "log": {
        "level": "info",
        "topics": [],
        "max_lines": 1000
    },
    "db": {
        "dir": "/core/config"
    },
    "host": {
        "name": [
            "example.com"
        ],
        "auto": true
    },
    "api": {
        "read_only": false,
        "access": {
            "http": {
                "allow": [],
                "block": []
            },
            "https": {
                "allow": [],
                "block": []
            }
        },
        "auth": {
            "enable": true,
            "disable_localhost": false,
            "username": "admin",
            "password": "datarhei",
            "jwt": {
                "secret": "cz$L%a(d%%[lLh;Y8dahIjcQx+tBq(%5"
            },
            "auth0": {
                "enable": false,
                "tenants": []
            }
        }
    },
    "tls": {
        "address": ":8181",
        "enable": true,
        "auto": true,
        "email": "[email protected]",
        "cert_file": "",
        "key_file": ""
    },
    "storage": {
        "disk": {
            "dir": "/core/data",
            "max_size_mbytes": 50000,
            "cache": {
                "enable": true,
                "max_size_mbytes": 500,
                "ttl_seconds": 300,
                "max_file_size_mbytes": 1,
                "types": {
                    "allow": [],
                    "block": [
                        ".m3u8",
                        ".mpd"
                    ]
                }
            }
        },
        "memory": {
            "auth": {
                "enable": true,
                "username": "admin",
                "password": "WH8y3alD7pHGsuBGwb"
            },
            "max_size_mbytes": 2000,
            "purge": true
        },
        "s3": [],
        "cors": {
            "origins": [
                "*"
            ]
        },
        "mimetypes_file": "./mime.types"
    },
    "rtmp": {
        "enable": true,
        "enable_tls": true,
        "address": ":1935",
        "address_tls": ":1936",
        "app": "/",
        "token": "n4jk235nNJKN4k5n24k"
    },
    "srt": {
        "enable": true,
        "address": ":6000",
        "passphrase": "n23jk4DD5DOAn5jk4DSS",
        "token": "",
        "log": {
            "enable": false,
            "topics": []
        }
    },
    "ffmpeg": {
        "binary": "ffmpeg",
        "max_processes": 0,
        "access": {
            "input": {
                "allow": [],
                "block": []
            },
            "output": {
                "allow": [],
                "block": []
            }
        },
        "log": {
            "max_lines": 50,
            "max_history": 3
        }
    },
    "playout": {
        "enable": false,
        "min_port": 0,
        "max_port": 0
    },
    "debug": {
        "profiling": false,
        "force_gc": 0
    },
    "metrics": {
        "enable": false,
        "enable_prometheus": false,
        "range_sec": 300,
        "interval_sec": 2
    },
    "sessions": {
        "enable": true,
        "ip_ignorelist": [
            "127.0.0.1/32",
            "::1/128"
        ],
        "session_timeout_sec": 30,
        "persist": true,
        "persist_interval_sec": 300,
        "max_bitrate_mbit": 250,
        "max_sessions": 50
    },
    "service": {
        "enable": false,
        "token": "",
        "url": "https://service.datarhei.com"
    },
    "router": {
        "blocked_prefixes": [
            "/api"
        ],
        "routes": {},
        "ui_path": "/core/ui"
    }
}

Required fields: version

Descriptions


Prometheus metrics

Metrics for the processes and other aspects are provided for a Prometheus scraper on /metrics.

You have to set metrics.enable and metrics.enable_prometheus to true in the settings, as sketched below.
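A minimal sketch of the relevant part of the Core configuration; the range and interval values are just examples:

{
    "metrics": {
        "enable": true,
        "enable_prometheus": true,
        "range_sec": 300,
        "interval_sec": 2
    }
}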

The currently available metrics are:

Metric | Type | Dimensions | Description
ffmpeg_process | gauge | core, process, name | General stats per process.
ffmpeg_process_io | gauge | core, process, type, id, index, media, name | Stats per input and output of a process.
mem_limit_bytes | gauge | core | Total available memory in bytes.
mem_free_bytes | gauge | core | Free memory in bytes.
net_rx_bytes | gauge | core, interface | Number of received bytes by interface.
net_tx_bytes | gauge | core, interface | Number of sent bytes by interface.
cpus_system_time_secs | gauge | core, cpu | System time per CPU in seconds.
cpus_user_time_secs | gauge | core, cpu | User time per CPU in seconds.
cpus_idle_time_secs | gauge | core, cpu | Idle time per CPU in seconds.
session_total | counter | core, collector | Total number of sessions by collector.
session_active | gauge | core, collector | Current number of active sessions by collector.
session_rx_bytes | counter | core, collector | Total received bytes by collector.
session_tx_bytes | counter | core, collector | Total sent bytes by collector.


Metrics

Query the collected metrics.

The core can collect metrics about itself, the system it is running on, and the FFmpeg processes it is executing. This is not enabled by default. Please check the metrics configuration for how to enable it, how often metrics are collected, and for how long they are kept available for querying.

Each metric is collected by a collector, like a topic. Each collector can contain several metrics and each metric can have labels to describe a variant of that metric. Think of used space on a filesystem where the variant is whether it is a disk filesystem or a memory filesystem.

All metrics can be scraped by Prometheus from the /metrics endpoint, if enabled.

List collectors

In order to know which metrics are available and to learn what they mean, you can retrieve a list of all metrics, their descriptions and labels.

Example:

Description:

Query metrics

All collected metrics can be queried by sending a query to the /api/v3/metrics endpoint. This query contains the names of the metrics with the labels you are interested in. Leave out the labels in order to get the values for all labels of that metric. By default you will receive the last collected value. You can also receive a whole timeseries for each metric and label by providing a timerange and stepsize in seconds.

Example:

Description:

Available collectors

CPU
Memory
Disk
Filesystem
Network
FFmpeg
Restream IO
Sessions
Uptime
curl http://127.0.0.1:8080/api/v3/metrics/ \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X POST \
   -d '{
         "metrics": [
            {
               "name": "cpu_idle"
            }, {
               "name": "mem_free"
            }
         ]
      }'
from core_client import Client

client = Client(base_url="http://127.0.0.1:8080")
client.login()

core_metrics = client.v3_metrics_post(
   config={
      "metrics": [
         {
            "name": "cpu_idle"
         }, {
            "name": "mem_free"
         }
      ]
   }
)
print(core_metrics)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
    "github.com/datarhei/core-client-go/v16/api"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

// get the latest collected value
query := api.MetricsQuery{
    Metrics: []api.MetricsQueryMetric{
        {
            Name: "cpu_idle",
        },
        {
            Name: "mem_free",
        },
    },
}

metrics, err := client.Metrics(query)
fmt.Printf("%+v\n", metrics)

// get the collected values from the last 10 minutes in
// steps of 5 seconds
query = api.MetricsQuery{
    Timerange: 600,
    Interval: 5,
    Metrics: []api.MetricsQueryMetric{
        {
            Name: "cpu_idle",
        },
        {
            Name: "mem_free",
        },
    },
}

metrics, err = client.Metrics(query)
fmt.Printf("%+v\n", metrics)
[
    {
        "name": "cpu_idle",
        "description": "Percentage of idle CPU",
        "labels": []
    },
    {
        "name": "cpu_ncpu",
        "description": "Number of logical CPUs in the system",
        "labels": []
    },
    {
        "name": "cpu_other",
        "description": "Percentage of CPU used for other subsystems",
        "labels": []
    },
    {
        "name": "cpu_system",
        "description": "Percentage of CPU used for the system",
        "labels": []
    },
    {
        "name": "cpu_user",
        "description": "Percentage of CPU used for the user",
        "labels": []
    }
]
[
    {
        "name": "mem_free",
        "description": "Free memory in bytes",
        "labels": []
    },
    {
        "name": "mem_total",
        "description": "Total available memory in bytes",
        "labels": []
    }
]
[
    {
        "name": "disk_total",
        "description": "Total size of the disk in bytes",
        "labels": [
            "path"
        ]
    },
    {
        "name": "disk_usage",
        "description": "Number of used bytes on the disk",
        "labels": [
            "path"
        ]
    }
]
[
    {
        "name": "filesystem_files",
        "description": "Number of files on the filesystem (excluding directories)",
        "labels": [
            "name"
        ]
    },
    {
        "name": "filesystem_limit",
        "description": "Total size of the filesystem in bytes, negative if unlimited",
        "labels": [
            "name"
        ]
    },
    {
        "name": "filesystem_usage",
        "description": "Number of used bytes on the filesystem",
        "labels": [
            "name"
        ]
    }
]
[
    {
        "name": "net_rx",
        "description": "Number of received bytes",
        "labels": [
            "interface"
        ]
    },
    {
        "name": "net_tx",
        "description": "Number of transmitted bytes",
        "labels": [
            "interface"
        ]
    }
]
[{
    "name": "ffmpeg_process",
    "description": "State of the ffmpeg process",
    "labels": [
        "state"
    ]
}]
[
    {
        "name": "restream_io",
        "description": "Current process IO values by name",
        "labels": [
            "processid",
            "type",
            "id",
            "address",
            "index",
            "stream",
            "media",
            "name"
        ]
    },
    {
        "name": "restream_process",
        "description": "Current process values by name",
        "labels": [
            "processid",
            "state",
            "order",
            "name"
        ]
    },
    {
        "name": "restream_process_states",
        "description": "Current process state",
        "labels": [
            "processid",
            "state"
        ]
    },
    {
        "name": "restream_state",
        "description": "Summarized process states",
        "labels": [
            "state"
        ]
    }
]
[
    {
        "name": "session_active",
        "description": "Number of current sessions",
        "labels": [
            "collector"
        ]
    },
    {
        "name": "session_limit",
        "description": "Max. number of concurrent sessions",
        "labels": [
            "collector"
        ]
    },
    {
        "name": "session_maxrxbitrate",
        "description": "Max. allowed receiving bitrate in bit per second",
        "labels": [
            "collector"
        ]
    },
    {
        "name": "session_maxtxbitrate",
        "description": "Max. allowed transmitting bitrate in bit per second",
        "labels": [
            "collector"
        ]
    },
    {
        "name": "session_rxbitrate",
        "description": "Current receiving bitrate in bit per second",
        "labels": [
            "collector"
        ]
    },
    {
        "name": "session_rxbytes",
        "description": "Number of received bytes",
        "labels": [
            "collector"
        ]
    },
    {
        "name": "session_total",
        "description": "Total sessions",
        "labels": [
            "collector"
        ]
    },
    {
        "name": "session_txbitrate",
        "description": "Current transmitting bitrate in bit per second",
        "labels": [
            "collector"
        ]
    },
    {
        "name": "session_txbytes",
        "description": "Number of transmitted bytes",
        "labels": [
            "collector"
        ]
    }
]
[
    {
        "name": "uptime_uptime",
        "description": "Current uptime in seconds",
        "labels": []
    }
]
curl http://127.0.0.1:8080/api/v3/metrics \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

core_metrics_collection_list = client.v3_metrics_get()
print(core_metrics_collection_list)
import (
    "fmt"
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

metrics, err := client.MetricsList()

for _, m := range metrics {
    fmt.Printf("%+v\n", m)
}

Application log

get

Get the last log lines of the Restreamer application

Authorizations
Query parameters
format (string, optional)

Format of the list of log events (*console, raw)

Responses
200
application log
application/json
Response: string[]
get
GET /v3/log HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

application log

[
  "text"
]

List all publishing RTMP streams

get

List all currently publishing RTMP streams.

Authorizations
Responses
200
OK
application/json
get
GET /v3/rtmp HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

[
  {
    "name": "text"
  }
]

Issue a command to a process

put

Issue a command to a process: start, stop, reload, restart

Authorizations
Path parameters
id (string, required)

Process ID

Body
command (string · enum, required). Possible values: start, stop, reload, restart
Responses
200
OK
application/json
Response: string
400
Bad Request
application/json
404
Not Found
application/json
put
PUT /v3/process/{id}/command HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 19

{
  "command": "start"
}
text

Add JSON metadata with a process under the given key

put

Add arbitrary JSON metadata under the given key. If the key exists, all already stored metadata with this key will be overwritten. If the key doesn't exist, it will be created.

Authorizations
Path parameters
id (string, required)

Process ID

key (string, required)

Key for data store

Body
any (optional)
Responses
200
OK
application/json
Response: any
400
Bad Request
application/json
404
Not Found
application/json
put
PUT /v3/process/{id}/metadata/{key} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*

No content

Retrieve JSON metadata stored with a process under a key

get

Retrieve the previously stored JSON metadata under the given key. If the key is empty, all metadata will be returned.

Authorizations
Path parameters
id (string, required)

Process ID

key (string, required)

Key for data store

Responses
200
OK
application/json
Response: any
400
Bad Request
application/json
404
Not Found
application/json
get
GET /v3/process/{id}/metadata/{key} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*

No content

FFmpeg capabilities

get

List all detected FFmpeg capabilities.

Authorizations
Responses
200
OK
application/json
get
GET /v3/skills HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

{
  "codecs": {
    "audio": [
      {
        "decoders": [
          "text"
        ],
        "encoders": [
          "text"
        ],
        "id": "text",
        "name": "text"
      }
    ],
    "subtitle": [
      {
        "decoders": [
          "text"
        ],
        "encoders": [
          "text"
        ],
        "id": "text",
        "name": "text"
      }
    ],
    "video": [
      {
        "decoders": [
          "text"
        ],
        "encoders": [
          "text"
        ],
        "id": "text",
        "name": "text"
      }
    ]
  },
  "devices": {
    "demuxers": [
      {
        "devices": [
          {
            "extra": "text",
            "id": "text",
            "media": "text",
            "name": "text"
          }
        ],
        "id": "text",
        "name": "text"
      }
    ],
    "muxers": [
      {
        "devices": [
          {
            "extra": "text",
            "id": "text",
            "media": "text",
            "name": "text"
          }
        ],
        "id": "text",
        "name": "text"
      }
    ]
  },
  "ffmpeg": {
    "compiler": "text",
    "configuration": "text",
    "libraries": [
      {
        "compiled": "text",
        "linked": "text",
        "name": "text"
      }
    ],
    "version": "text"
  },
  "filter": [
    {
      "id": "text",
      "name": "text"
    }
  ],
  "formats": {
    "demuxers": [
      {
        "id": "text",
        "name": "text"
      }
    ],
    "muxers": [
      {
        "id": "text",
        "name": "text"
      }
    ]
  },
  "hwaccels": [
    {
      "id": "text",
      "name": "text"
    }
  ],
  "protocols": {
    "input": [
      {
        "id": "text",
        "name": "text"
      }
    ],
    "output": [
      {
        "id": "text",
        "name": "text"
      }
    ]
  }
}

Refresh FFmpeg capabilities

get

Refresh the available FFmpeg capabilities.

Authorizations
Responses
200
OK
application/json
get
GET /v3/skills/reload HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

{
  "codecs": {
    "audio": [
      {
        "decoders": [
          "text"
        ],
        "encoders": [
          "text"
        ],
        "id": "text",
        "name": "text"
      }
    ],
    "subtitle": [
      {
        "decoders": [
          "text"
        ],
        "encoders": [
          "text"
        ],
        "id": "text",
        "name": "text"
      }
    ],
    "video": [
      {
        "decoders": [
          "text"
        ],
        "encoders": [
          "text"
        ],
        "id": "text",
        "name": "text"
      }
    ]
  },
  "devices": {
    "demuxers": [
      {
        "devices": [
          {
            "extra": "text",
            "id": "text",
            "media": "text",
            "name": "text"
          }
        ],
        "id": "text",
        "name": "text"
      }
    ],
    "muxers": [
      {
        "devices": [
          {
            "extra": "text",
            "id": "text",
            "media": "text",
            "name": "text"
          }
        ],
        "id": "text",
        "name": "text"
      }
    ]
  },
  "ffmpeg": {
    "compiler": "text",
    "configuration": "text",
    "libraries": [
      {
        "compiled": "text",
        "linked": "text",
        "name": "text"
      }
    ],
    "version": "text"
  },
  "filter": [
    {
      "id": "text",
      "name": "text"
    }
  ],
  "formats": {
    "demuxers": [
      {
        "id": "text",
        "name": "text"
      }
    ],
    "muxers": [
      {
        "id": "text",
        "name": "text"
      }
    ]
  },
  "hwaccels": [
    {
      "id": "text",
      "name": "text"
    }
  ],
  "protocols": {
    "input": [
      {
        "id": "text",
        "name": "text"
      }
    ],
    "output": [
      {
        "id": "text",
        "name": "text"
      }
    ]
  }
}

Fetch minimal statistics about a process

get

Fetch minimal statistics about a process. This endpoint is not protected by any authentication.

Path parameters
id (string, required)

ID of a process

Responses
200
OK
application/json
404
Not Found
application/json
get
GET /v3/widget/process/{id} HTTP/1.1
Host: api
Accept: */*
{
  "current_sessions": 1,
  "total_sessions": 1,
  "uptime": 1
}

Retrieve an access and a refresh token

post

Retrieve valid JWT access and refresh tokens to use for accessing the API. Login either by username/password or Auth0 token

Authorizations
Body
password (string, required)
username (string, required)
Responses
200
OK
application/json
400
Bad Request
application/json
403
Forbidden
application/json
500
Internal Server Error
application/json
post
POST /login HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 37

{
  "password": "text",
  "username": "text"
}
{
  "access_token": "text",
  "refresh_token": "text"
}


Retrieve a new access token

get

Retrieve a new access token by providing the refresh token

Authorizations
Responses
200
OK
application/json
500
Internal Server Error
application/json
get
GET /login/refresh HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
{
  "access_token": "text"
}

Probe a process

get

Probe an existing process to get detailed stream information about its inputs.

Authorizations
Path parameters
id (string, required)

Process ID

Responses
200
OK
application/json
get
GET /v3/process/{id}/probe HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

{
  "log": [
    "text"
  ],
  "streams": [
    {
      "bitrate_kbps": 1,
      "channels": 1,
      "codec": "text",
      "coder": "text",
      "duration_sec": 1,
      "format": "text",
      "fps": 1,
      "height": 1,
      "index": 1,
      "language": "text",
      "layout": "text",
      "pix_fmt": "text",
      "sampling_hz": 1,
      "stream": 1,
      "type": "text",
      "url": "text",
      "width": 1
    }
  ]
}

List all publishing SRT streams

get

List all currently publishing SRT streams. This endpoint is EXPERIMENTAL and may change in future.

Authorizations
Responses
200
OK
application/json
get
GET /v3/srt HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

[
  {
    "connections": {
      "ANY_ADDITIONAL_PROPERTY": {
        "log": {
          "ANY_ADDITIONAL_PROPERTY": [
            {
              "msg": [
                "text"
              ],
              "ts": 1
            }
          ]
        },
        "stats": {
          "avail_recv_buf_bytes": 1,
          "avail_send_buf_bytes": 1,
          "bandwidth_mbit": 1,
          "flight_size_pkt": 1,
          "flow_window_pkt": 1,
          "max_bandwidth_mbit": 1,
          "mss_bytes": 1,
          "pkt_recv_avg_belated_time_ms": 1,
          "pkt_send_period_us": 1,
          "recv_ack_pkt": 1,
          "recv_buf_bytes": 1,
          "recv_buf_ms": 1,
          "recv_buf_pkt": 1,
          "recv_bytes": 1,
          "recv_drop_bytes": 1,
          "recv_drop_pkt": 1,
          "recv_km_pkt": 1,
          "recv_loss_bytes": 1,
          "recv_loss_pkt": 1,
          "recv_nak_pkt": 1,
          "recv_pkt": 1,
          "recv_retran_pkts": 1,
          "recv_tsbpd_delay_ms": 1,
          "recv_undecrypt_bytes": 1,
          "recv_undecrypt_pkt": 1,
          "recv_unique_bytes": 1,
          "recv_unique_pkt": 1,
          "reorder_tolerance_pkt": 1,
          "rtt_ms": 1,
          "send_buf_bytes": 1,
          "send_buf_ms": 1,
          "send_buf_pkt": 1,
          "send_drop_bytes": 1,
          "send_drop_pkt": 1,
          "send_duration_us": 1,
          "send_km_pkt": 1,
          "send_loss_pkt": 1,
          "send_tsbpd_delay_ms": 1,
          "sent_ack_pkt": 1,
          "sent_bytes": 1,
          "sent_nak_pkt": 1,
          "sent_pkt": 1,
          "sent_retrans_bytes": 1,
          "sent_retrans_pkt": 1,
          "sent_unique_bytes": 1,
          "sent_unique_pkt": 1,
          "timestamp_ms": 1
        }
      }
    },
    "log": {
      "ANY_ADDITIONAL_PROPERTY": [
        {
          "msg": [
            "text"
          ],
          "ts": 1
        }
      ]
    },
    "publisher": {
      "ANY_ADDITIONAL_PROPERTY": 1
    },
    "subscriber": {
      "ANY_ADDITIONAL_PROPERTY": [
        1
      ]
    }
  }
]

Get the logs of a process

get

Get the logs and the log history of a process.

Authorizations
Path parameters
id (string, required)

Process ID

Responses
200
OK
application/json
400
Bad Request
application/json
404
Not Found
application/json
get
GET /v3/process/{id}/report HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
{
  "created_at": 1,
  "history": [
    {
      "created_at": 1,
      "log": [
        [
          "text"
        ]
      ],
      "prelude": [
        "text"
      ]
    }
  ],
  "log": [
    [
      "text"
    ]
  ],
  "prelude": [
    "text"
  ]
}

Fetch a file from the memory filesystem

get

Fetch a file from the memory filesystem

Path parameters
path (string, required)

Path to file

Responses
200
OK
Response: string
301
Moved Permanently
404
Not Found
get
GET /{path} HTTP/1.1
Host: memfs
Accept: */*
text

Add a file to the memory filesystem

put

Writes or overwrites a file on the memory filesystem

Authorizations
Path parameters
path (string, required)

Path to file

Body: integer[] (optional)
Responses
201
Created
Response: string
204
No Content
507
Insufficient Storage
put
PUT /{path} HTTP/1.1
Host: memfs
Authorization: Basic username:password
Content-Type: application/data
Accept: */*
Content-Length: 3

[
  1
]
text

Remove a file from the memory filesystem

delete

Remove a file from the memory filesystem

Authorizations
Path parameters
path (string, required)

Path to file

Responses
200
OK
text/plain
Response: string
404
Not Found
text/plain
delete
DELETE /{path} HTTP/1.1
Host: memfs
Authorization: Basic username:password
Accept: */*
text

Add a file to the memory filesystem

put

Writes or overwrites a file on the memory filesystem

Authorizations
Path parameters
path (string, required)

Path to file

Body: integer[] (optional)
Responses
201
Created
Response: string
204
No Content
507
Insufficient Storage
put
PUT /v3/fs/mem/{path} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/data
Accept: */*
Content-Length: 3

[
  1
]
text

List all files on the memory filesystem

get

List all files on the memory filesystem. The listing can be ordered by name, size, or date of last modification in ascending or descending order.

Authorizations
Query parameters
glob (string, optional)

glob pattern for file names

sort (string, optional)

none, name, size, lastmod

order (string, optional)

asc, desc

Responses
200
OK
application/json
get
GET /v3/fs/mem HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

[
  {
    "last_modified": 1,
    "name": "text",
    "size_bytes": 1
  }
]

Fetch a file from the memory filesystem

get

Fetch a file from the memory filesystem

Authorizations
Path parameters
path (string, required)

Path to file

Responses
200
OK
Response: string
301
Moved Permanently
404
Not Found
get
GET /v3/fs/mem/{path} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
text

Create a link to a file in the memory filesystem

patch

Create a link to a file in the memory filesystem. The file linked to has to exist.

Authorizations
Path parameters
path (string, required)

Path to file

Body
string (optional)
Responses
201
Created
Response: string
400
Bad Request
patch
PATCH /v3/fs/mem/{path} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/data
Accept: */*
Content-Length: 6

"text"
text

Remove a file from the memory filesystem

delete

Remove a file from the memory filesystem

Authorizations
Path parameters
path (string, required)

Path to file

Responses
200
OK
text/plain
Response: string
404
Not Found
text/plain
delete
DELETE /v3/fs/mem/{path} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
text

Get the state of a process

get

Get the state and progress data of a process.

Authorizations
Path parameters
id (string, required)

Process ID

Responses
200
OK
application/json
400
Bad Request
application/json
404
Not Found
application/json
get
GET /v3/process/{id}/state HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
{
  "command": [
    "text"
  ],
  "cpu_usage": 1,
  "exec": "text",
  "last_logline": "text",
  "memory_bytes": 1,
  "order": "text",
  "progress": {
    "bitrate_kbit": 1,
    "drop": 1,
    "dup": 1,
    "fps": 1,
    "frame": 1,
    "inputs": [
      {
        "address": "text",
        "avstream": {
          "aqueue": 1,
          "drop": 1,
          "dup": 1,
          "duplicating": true,
          "enc": 1,
          "gop": "text",
          "input": {
            "packet": 1,
            "size_kb": 1,
            "state": "running",
            "time": 1
          },
          "looping": true,
          "output": {
            "packet": 1,
            "size_kb": 1,
            "state": "running",
            "time": 1
          },
          "queue": 1
        },
        "bitrate_kbit": 1,
        "channels": 1,
        "codec": "text",
        "coder": "text",
        "format": "text",
        "fps": 1,
        "frame": 1,
        "height": 1,
        "id": "text",
        "index": 1,
        "layout": "text",
        "packet": 1,
        "pix_fmt": "text",
        "pps": 1,
        "q": 1,
        "sampling_hz": 1,
        "size_kb": 1,
        "stream": 1,
        "type": "text",
        "width": 1
      }
    ],
    "outputs": [
      {
        "address": "text",
        "avstream": {
          "aqueue": 1,
          "drop": 1,
          "dup": 1,
          "duplicating": true,
          "enc": 1,
          "gop": "text",
          "input": {
            "packet": 1,
            "size_kb": 1,
            "state": "running",
            "time": 1
          },
          "looping": true,
          "output": {
            "packet": 1,
            "size_kb": 1,
            "state": "running",
            "time": 1
          },
          "queue": 1
        },
        "bitrate_kbit": 1,
        "channels": 1,
        "codec": "text",
        "coder": "text",
        "format": "text",
        "fps": 1,
        "frame": 1,
        "height": 1,
        "id": "text",
        "index": 1,
        "layout": "text",
        "packet": 1,
        "pix_fmt": "text",
        "pps": 1,
        "q": 1,
        "sampling_hz": 1,
        "size_kb": 1,
        "stream": 1,
        "type": "text",
        "width": 1
      }
    ],
    "packet": 1,
    "q": 1,
    "size_kb": 1,
    "speed": 1,
    "time": 1
  },
  "reconnect_seconds": 1,
  "runtime_seconds": 1
}

Retrieve the currently active Restreamer configuration

get

Retrieve the currently active Restreamer configuration

Authorizations
Responses
200
OK
application/json
get
GET /v3/config HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

{
  "config": {
    "address": "text",
    "api": {
      "access": {
        "http": {
          "allow": [
            "text"
          ],
          "block": [
            "text"
          ]
        },
        "https": {
          "allow": [
            "text"
          ],
          "block": [
            "text"
          ]
        }
      },
      "auth": {
        "auth0": {
          "enable": true,
          "tenants": [
            {
              "audience": "text",
              "clientid": "text",
              "domain": "text",
              "users": [
                "text"
              ]
            }
          ]
        },
        "disable_localhost": true,
        "enable": true,
        "jwt": {
          "secret": "text"
        },
        "password": "text",
        "username": "text"
      },
      "read_only": true
    },
    "created_at": "text",
    "db": {
      "dir": "text"
    },
    "debug": {
      "force_gc": 1,
      "profiling": true
    },
    "ffmpeg": {
      "access": {
        "input": {
          "allow": [
            "text"
          ],
          "block": [
            "text"
          ]
        },
        "output": {
          "allow": [
            "text"
          ],
          "block": [
            "text"
          ]
        }
      },
      "binary": "text",
      "log": {
        "max_history": 1,
        "max_lines": 1
      },
      "max_processes": 1
    },
    "host": {
      "auto": true,
      "name": [
        "text"
      ]
    },
    "id": "text",
    "log": {
      "level": "debug",
      "max_lines": 1,
      "topics": [
        "text"
      ]
    },
    "metrics": {
      "enable": true,
      "enable_prometheus": true,
      "interval_sec": 1,
      "range_sec": 1
    },
    "name": "text",
    "playout": {
      "enable": true,
      "max_port": 1,
      "min_port": 1
    },
    "router": {
      "blocked_prefixes": [
        "text"
      ],
      "routes": {
        "ANY_ADDITIONAL_PROPERTY": "text"
      },
      "ui_path": "text"
    },
    "rtmp": {
      "address": "text",
      "address_tls": "text",
      "app": "text",
      "enable": true,
      "enable_tls": true,
      "token": "text"
    },
    "service": {
      "enable": true,
      "token": "text",
      "url": "text"
    },
    "sessions": {
      "enable": true,
      "ip_ignorelist": [
        "text"
      ],
      "max_bitrate_mbit": 1,
      "max_sessions": 1,
      "persist": true,
      "persist_interval_sec": 1,
      "session_timeout_sec": 1
    },
    "srt": {
      "address": "text",
      "enable": true,
      "log": {
        "enable": true,
        "topics": [
          "text"
        ]
      },
      "passphrase": "text",
      "token": "text"
    },
    "storage": {
      "cors": {
        "origins": [
          "text"
        ]
      },
      "disk": {
        "cache": {
          "enable": true,
          "max_file_size_mbytes": 1,
          "max_size_mbytes": 1,
          "ttl_seconds": 1,
          "types": {
            "allow": [
              "text"
            ],
            "block": [
              "text"
            ]
          }
        },
        "dir": "text",
        "max_size_mbytes": 1
      },
      "memory": {
        "auth": {
          "enable": true,
          "password": "text",
          "username": "text"
        },
        "max_size_mbytes": 1,
        "purge": true
      },
      "mimetypes_file": "text"
    },
    "tls": {
      "address": "text",
      "auto": true,
      "cert_file": "text",
      "email": "text",
      "enable": true,
      "key_file": "text"
    },
    "update_check": true,
    "version": 1
  },
  "created_at": "text",
  "loaded_at": "text",
  "overrides": [
    "text"
  ],
  "updated_at": "text"
}

Update the current Restreamer configuration

put

Update the current Restreamer configuration by providing a complete or partial configuration. Fields that are not provided will not be changed.

Authorizations
Body
address (string, optional)
created_at (string, optional)
id (string, optional)
name (string, optional)
update_check (boolean, optional)
version (integer, optional)
Responses
200
OK
application/json
Response: string
400
Bad Request
application/json
409
Conflict
application/json
put
PUT /v3/config HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 1874

{
  "address": "text",
  "api": {
    "access": {
      "http": {
        "allow": [
          "text"
        ],
        "block": [
          "text"
        ]
      },
      "https": {
        "allow": [
          "text"
        ],
        "block": [
          "text"
        ]
      }
    },
    "auth": {
      "auth0": {
        "enable": true,
        "tenants": [
          {
            "audience": "text",
            "clientid": "text",
            "domain": "text",
            "users": [
              "text"
            ]
          }
        ]
      },
      "disable_localhost": true,
      "enable": true,
      "jwt": {
        "secret": "text"
      },
      "password": "text",
      "username": "text"
    },
    "read_only": true
  },
  "created_at": "text",
  "db": {
    "dir": "text"
  },
  "debug": {
    "force_gc": 1,
    "profiling": true
  },
  "ffmpeg": {
    "access": {
      "input": {
        "allow": [
          "text"
        ],
        "block": [
          "text"
        ]
      },
      "output": {
        "allow": [
          "text"
        ],
        "block": [
          "text"
        ]
      }
    },
    "binary": "text",
    "log": {
      "max_history": 1,
      "max_lines": 1
    },
    "max_processes": 1
  },
  "host": {
    "auto": true,
    "name": [
      "text"
    ]
  },
  "id": "text",
  "log": {
    "level": "debug",
    "max_lines": 1,
    "topics": [
      "text"
    ]
  },
  "metrics": {
    "enable": true,
    "enable_prometheus": true,
    "interval_sec": 1,
    "range_sec": 1
  },
  "name": "text",
  "playout": {
    "enable": true,
    "max_port": 1,
    "min_port": 1
  },
  "router": {
    "blocked_prefixes": [
      "text"
    ],
    "routes": {
      "ANY_ADDITIONAL_PROPERTY": "text"
    },
    "ui_path": "text"
  },
  "rtmp": {
    "address": "text",
    "address_tls": "text",
    "app": "text",
    "enable": true,
    "enable_tls": true,
    "token": "text"
  },
  "service": {
    "enable": true,
    "token": "text",
    "url": "text"
  },
  "sessions": {
    "enable": true,
    "ip_ignorelist": [
      "text"
    ],
    "max_bitrate_mbit": 1,
    "max_sessions": 1,
    "persist": true,
    "persist_interval_sec": 1,
    "session_timeout_sec": 1
  },
  "srt": {
    "address": "text",
    "enable": true,
    "log": {
      "enable": true,
      "topics": [
        "text"
      ]
    },
    "passphrase": "text",
    "token": "text"
  },
  "storage": {
    "cors": {
      "origins": [
        "text"
      ]
    },
    "disk": {
      "cache": {
        "enable": true,
        "max_file_size_mbytes": 1,
        "max_size_mbytes": 1,
        "ttl_seconds": 1,
        "types": {
          "allow": [
            "text"
          ],
          "block": [
            "text"
          ]
        }
      },
      "dir": "text",
      "max_size_mbytes": 1
    },
    "memory": {
      "auth": {
        "enable": true,
        "password": "text",
        "username": "text"
      },
      "max_size_mbytes": 1,
      "purge": true
    },
    "mimetypes_file": "text"
  },
  "tls": {
    "address": "text",
    "auto": true,
    "cert_file": "text",
    "email": "text",
    "enable": true,
    "key_file": "text"
  },
  "update_check": true,
  "version": 1
}
text

Add a file to a filesystem

put

Writes or overwrites a file on a filesystem

Authorizations
Path parameters
name (string, required)

Name of the filesystem

path (string, required)

Path to file

Body: integer[] (optional)
Responses
201
Created
Response: string
204
No Content
507
Insufficient Storage
put
PUT /v3/fs/{name}/{path} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/data
Accept: */*
Content-Length: 3

[
  1
]
text

List all files on a filesystem

get

List all files on a filesystem. The listing can be ordered by name, size, or date of last modification in ascending or descending order.

Authorizations
Path parameters
name (string, required)

Name of the filesystem

Query parameters
glob (string, optional)

glob pattern for file names

sort (string, optional)

none, name, size, lastmod

order (string, optional)

asc, desc

Responses
200
OK
application/json
get
GET /v3/fs/{name} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

[
  {
    "last_modified": 1,
    "name": "text",
    "size_bytes": 1
  }
]

Fetch a file from a filesystem

get

Fetch a file from a filesystem

Authorizations
Path parameters
name (string, required)

Name of the filesystem

path (string, required)

Path to file

Responses
200
OK
Response: string
301
Moved Permanently
404
Not Found
get
GET /v3/fs/{name}/{path} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
text

Remove a file from a filesystem

delete

Remove a file from a filesystem

Authorizations
Path parameters
name (string, required)

Name of the filesystem

path (string, required)

Path to file

Responses
200
OK
text/plain
Response: string
404
Not Found
text/plain
delete
DELETE /v3/fs/{name}/{path} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
text

Reload the currently active configuration

get

Reload the currently active configuration. This will trigger a restart of the Core.

Authorizations
Responses
200
OK
application/json
Response: string
get
GET /v3/config/reload HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

text

Process

Manage FFmpeg processes

The /api/v3/process family of API calls lets you manage and monitor FFmpeg processes run by the datarhei Core.

An FFmpeg process definition, as required by the API, consists of its inputs, its outputs, and global options. It is a very minimalistic abstraction of the FFmpeg command line and assumes that you know the command line options needed to achieve what you want.

The most minimal process definition is:

This will be translated to the FFmpeg command line:

Let's use this as a starting point for a more practical example. We want to generate a test video with silent audio, encode it to H.264 and AAC, and finally send it via RTMP to some destination.

Identification

You can give each process an ID with the field id. There are no restrictions regarding the format or allowed characters. If you don't provide an ID, one will be generated for you. The ID is used to identify the process later in API calls for querying and modifying the process after it has been created. An ID has to be unique for each datarhei Core.

In addition to the ID you can provide a reference with the field reference. This allows you to attach arbitrary information to the process. It will not be interpreted at all. You can use it, for example, to group different processes together.
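A sketch of how these two fields could look in a process config; the values are purely illustrative:

{
    "id": "test-1",
    "reference": "my-group"
}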

Inputs

First, we define the inputs:

This will be translated to the FFmpeg command line:

The id for each input is optional and will be used for later reference or for replacing placeholders (more about placeholders later). If you don't provide an ID for an input, it will be generated for you.
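As an illustration of the shape described above, a sketch of two inputs for the test video and silent audio mentioned earlier. The lavfi sources and options are an assumption, not taken from this document:

"input": [
    {
        "id": "video",
        "address": "testsrc2=size=1280x720:rate=25",
        "options": ["-f", "lavfi", "-re"]
    },
    {
        "id": "audio",
        "address": "anullsrc=r=44100:cl=stereo",
        "options": ["-f", "lavfi"]
    }
]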

Outputs

Next, we define the output. Because the inputs are raw video and raw audio data, we need to encode them.

This will be translated to the FFmpeg command line:

Putting it all together:

All this together results in the command line:
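For orientation, a sketch of what such a complete process definition could look like. The RTMP destination is hypothetical and the encoder options are just one possible choice:

{
    "id": "test-1",
    "reference": "my-group",
    "input": [
        {
            "id": "video",
            "address": "testsrc2=size=1280x720:rate=25",
            "options": ["-f", "lavfi", "-re"]
        },
        {
            "id": "audio",
            "address": "anullsrc=r=44100:cl=stereo",
            "options": ["-f", "lavfi"]
        }
    ],
    "output": [
        {
            "id": "rtmp-out",
            "address": "rtmp://example.com/live/stream",
            "options": ["-codec:v", "libx264", "-preset", "veryfast", "-codec:a", "aac", "-f", "flv"]
        }
    ]
}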

Global options

FFmpeg is quite talkative by default and we want to be notified only about errors. We can add global options that will be placed before any inputs:

The inputs and outputs are left out for the sake of brevity. Now our FFmpeg command line is complete:
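A sketch of the global options field, matching the intent of logging only errors; the exact flags are an example:

{
    "options": ["-loglevel", "error"]
}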

Control

The process config also allows you to control what happens when you create the process and what happens after the process finishes:

The reconnect option tells the datarhei Core to restart the process in case it finished (either normally or because of an error). It will wait reconnect_delay_seconds before the process is restarted. Set this value to 0 in order to restart the process immediately.

The autostart option will cause the process to be started as soon as it has been created. This is equivalent to setting this option to false, creating the process, and then issuing the start command.

The stale_timeout_seconds option will cause the process to be (forcefully) stopped in case it stalls, i.e. no packets are processed for this number of seconds. Disable this feature by setting the value to 0.
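A sketch of these control fields with example values:

{
    "reconnect": true,
    "reconnect_delay_seconds": 10,
    "autostart": true,
    "stale_timeout_seconds": 30
}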

Limits

The datarhei Core constantly monitors the vitals of each process. This also includes the memory and CPU consumption. If you have limited resources or you want to set an upper limit for the resources a process is allowed to consume, you can set the limit options in the process config:

The cpu_usage option sets the limit of CPU usage in percent, e.g. not more than 10% of the CPU should be used for this process. A value of 0 (the default) will disable this option.

The memory_mbytes option sets the limit of memory usage in megabytes, e.g. not more than 50 megabytes of memory should be used. A value of 0 (the default) will disable this option.

If the resource consumption for at least one of the limits is exceeded for waitfor_seconds, then the process will be forcefully terminated.
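A sketch of the limit options with example values; nesting them under a limits object is an assumption, the field names follow the text above:

{
    "limits": {
        "cpu_usage": 10,
        "memory_mbytes": 50,
        "waitfor_seconds": 5
    }
}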

Cleanup

For long running processes that produce a lot of files (e.g. an HLS live stream), it can happen that not all created files are removed by the ffmpeg process itself, or that the process exits and doesn't or can't clean up the files it created. This leaves files on the filesystem that shouldn't be there and just use up space.

With the optional array of cleanup rules for each output, it is possible to define rules for removing files from the memory filesystem or disk. Each rule consists of a glob pattern and a max. allowed number of files matching that pattern or permitted maximum age for the files matching that pattern. The pattern starts with either memfs: or diskfs: depending on which filesystem this rule is designated to. Then a glob pattern follows to identify the files. If max_files is set to a number larger than 0, then the oldest files from the matching files will be deleted if the list of matching files is longer than that number. If max_file_age_seconds is set to a number larger than 0, then all files that are older than this number of seconds from the matching files will be deleted. If purge_on_delete is set to true, then all matching files will be deleted when the process is deleted.

As of version 16.12.0 the prefixes for selecting the filesystem (e.g. diskfs: or memfs:) correspond to the configured name of the filesystem, in case you mounted one or more S3 filesystems. Reserved names are disk, diskfs, mem, and memfs. The names disk and mem are synonyms for diskfs and memfs, respectively. E.g. if you have an S3 filesystem with the name aws mounted, use the aws: prefix.

Optional cleanup configuration:

With the pattern parameter you can select files based on a glob pattern, with the addition of the ** placeholder to include multiple subdirectories, e.g. selecting all .ts files in the root directory has the pattern /*.ts, selecting all .ts files in the whole filesystem has the pattern /**.ts.

As part of the pattern you can use placeholders.

Examples:

The file on the disk with the ID of the process as the name and the extension .m3u8.

All files on disk whose names starts with the ID of the process and that have the extension .m3u8 or .ts.

All files on disk that have the extension .ts and are in the folder structure denoted by the process' reference, ID, and the ID of the output this cleanup rule belongs to.

All files whose name starts with the reference of the process, e.g. /abc_1.ts, but not /abc/1.ts.

All files whose name or path starts with the reference of the process, e.g. /abc_1.ts, /abc/1.ts, or /abc_data/foobar/42.txt.
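A sketch of a cleanup definition on an output, using the fields described above; the patterns and numbers are examples:

"cleanup": [
    {
        "pattern": "memfs:/{processid}_*.ts",
        "max_files": 12,
        "max_file_age_seconds": 0,
        "purge_on_delete": true
    },
    {
        "pattern": "diskfs:/{processid}.m3u8",
        "max_files": 0,
        "max_file_age_seconds": 0,
        "purge_on_delete": true
    }
]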

References

References allow you to refer to an output of another process and to use it as input. The address of the input has to be in the form #[processid]:output=[id], where [processid] denotes the ID of the process you want to use the output from, and [id] denotes the ID of the output of that process.

Example:

The second process will use rtmp://someip/live/stream as its input address.
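A sketch of the idea with two hypothetical processes; the IDs and addresses are examples:

{
    "id": "process-1",
    "output": [
        {
            "id": "out1",
            "address": "rtmp://someip/live/stream"
        }
    ]
}

{
    "id": "process-2",
    "input": [
        {
            "id": "in1",
            "address": "#process-1:output=out1"
        }
    ]
}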

Placeholder

Placeholders are a way to parametrize parts of the config. A placeholder is surrounded by curly braces, e.g. {processid}.

Some placeholder require parameters. Add parameters to a placeholder by appending a comma separated list of key/values, e.g. {placeholder,key1=value1,key2=value2}. This can be combined with escaping.

As of version 16.12.0 the value for a parameter of a placeholder can be a variable. Currently known variables are $processid and $reference. These will be replaced by the respective process ID and process reference, e.g. {rtmp,name=$processid.stream}.

Example:

Assume you have an input process that gets encoded in three different resolutions that are written to the disk. With placeholders you parametrize the output files. The input and the encoding options are left out for brevity.

This will create three files with the names a_b_360.mp4, a_b_720.mp4, and a_b_1080.mp4 in the directory defined in storage.disk.dir (see the sketch below).
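A sketch of the outputs for this example, assuming the process has the ID a and the reference b (which yields the file names above):

"output": [
    {
        "id": "360",
        "address": "{diskfs}/{processid}_{reference}_{outputid}.mp4"
    },
    {
        "id": "720",
        "address": "{diskfs}/{processid}_{reference}_{outputid}.mp4"
    },
    {
        "id": "1080",
        "address": "{diskfs}/{processid}_{reference}_{outputid}.mp4"
    }
]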

In case you use a placeholder in a place where characters need escaping (e.g. in the options of the tee output muxer), you can define the character to be escaped in the placeholder by adding it to the placeholder name, prefixed with a ^.

Example: you have a process with the ID abc:snapshot and in a filter option you have to escape all : in the value for the {processid} placeholder, write {processid^:}. It will then be replaced by abc\:snapshot. The escape character is always \. In case there are \ in the value, they will also get escaped.

All known placeholders are:

{processid}

Will be replaced by the ID of the process. Locations where this placeholder can be used: input.id, input.address, input.options, output.id, output.address, output.options, output.cleanup.pattern

{reference}

Will be replaced by the reference of the process. Locations where this placeholder can be used: input.id, input.address, input.options, output.id, output.address, output.options, output.cleanup.pattern

{inputid}

Will be replaced by the ID of the input. Locations where this placeholder can be used: input.address, input.options

{outputid}

Will be replaced by the ID of the output. Locations where this placeholder can be used: output.address, output.options, output.cleanup.pattern

{diskfs}

Will be replaced by the provided value of storage.disk.dir. Locations where this placeholder can be used: options, input.address, input.options, output.address, output.options

As of version 16.12.0 you can use the alternative syntax {fs:disk}.

{memfs}

Will be replaced by the internal base URL of the memory filesystem. This placeholder is convenient if you change, e.g., the listening port of the HTTP server. Then you don't need to modify each process configuration where you use the memory filesystem.

Locations where this placeholder can be used: input.address, input.options, output.address, output.options

Example: {memfs}/foobar.m3u8 will be replaced with http://127.0.0.1:8080/memfs/foobar.m3u8 if the datarhei Core is listening on 8080 for HTTP requests.

As of version 16.12.0 you can use the alternative syntax {fs:mem}.

{fs:[name]}

As of version 16.12.0. Will be replaced by the internal base URL of the named filesystem. This placeholder is convenient if you change, e.g., the listening port of the HTTP server. Then you don't need to modify each process configuration where you use that filesystem.

Locations where this placeholder can be used: input.address, input.options, output.address, output.options

Predefined names are disk and mem. Additional names correspond to the names of the mounted S3 filesystems.

Example: {fs:aws}/foobar.m3u8 will be replaced with http://127.0.0.1:8080/awsfs/foobar.m3u8 if the datarhei Core is listening on 8080 for HTTP requests, and the S3 storage is configured to have the name aws and the mountpoint /awsfs.

{rtmp}

Will be replaced by the internal address of the RTMP server. This placeholder is convenient if you change, e.g., the listening port or the app of the RTMP server. Then you don't need to modify each process configuration where you use the RTMP server.

Required parameter: name (name of the stream)

Locations where this placeholder can be used: input.address, output.address

Example: {rtmp,name=foobar.stream} will be replaced with rtmp://127.0.0.1:1935/live/foobar.stream?token=abc, if the RTMP server is configured to listen on port 1935, has the app /live, and requires the token abc. See the RTMP configuration.

{srt}

Will be replaced by the internal address of the SRT server. This placeholder is convenient if you change, e.g., the listening port of the SRT server. Then you don't need to modify each process configuration where you use the SRT server.

Required parameters: name (name of the resource), mode (either publish or request)

Locations where this placeholder can be used: input.address, output.address

Example: {srt,name=foobar,mode=request} will be replaced with srt://127.0.0.1:6000?mode=caller&transtype=live&streamid=foobar,mode:request,token=abc&passphrase=1234, if the SRT server is configured to listen on port 6000, requires the token abc and the passphrase 1234. See the SRT configuration.

Requires Core v16.11.0+

As of version 16.12.0 the placeholder accepts the parameter latency, which defaults to 20ms.

Create

Create a new process. The ID of the process will be required in order to query or manipulate the process later on. If you don't provide an ID, it will be generated for you. The response of the successful API call includes the process config as it has been stored (including the generated ID).

You can control the process with commands.

Example:

Description:

Read

These API calls let you query the current state of the processes that are registered on the datarhei Core. For each process several aspects are available, such as:

  • config The config with which the process has been created.

  • state The current state of the process, e.g. whether it is currently running and for how long. If a process is running, the progress data is also included. This includes a list of all input and output streams with all their vitals (frames, bitrate, codec, ...).

  • report The logging output from the FFmpeg process and a history of previous runs of the same process.

  • metadata All metadata associated with this process.

By default all aspects are included in the process listing. If you are only interested in specific aspects, then you can use the ?filter=... query parameter. Provide it a comma-separated list of aspects and then only those will be included in the response, e.g. ?filter=state,report.

List processes

This API call lists all processes that are registered on the datarhei Core. You can restrict the listed processes by providing

  • a comma-separated list of specific IDs (?id=a,b,c)

  • a reference (?reference=...)

  • a pattern for the matching ID (?idpattern=...)

  • a pattern for the matching references (?refpattern=...)

With the idpattern and refpattern query parameters you can select process IDs and/or references based on a glob pattern. If you provide both a list of specific IDs (or a reference) and patterns for IDs or references, the patterns are matched first. The resulting list of IDs and references is then checked against the provided list of IDs or the reference.
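
For example, to list only processes whose ID matches main_* and whose reference matches channel_* (both patterns are arbitrary examples):

curl "http://127.0.0.1:8080/api/v3/process?idpattern=main_*&refpattern=channel_*" \
   -H 'accept: application/json' \
   -X GET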

Example:

Description:

Process by ID

If you know the ID of the process you want the details about, you can fetch them directly. Here you can apply the filter query parameter regarding the aspects.

Example:

The second parameter of client.Process is the list of aspects. An empty list means all aspects.

Description:

Process config by ID

This endpoint lets you fetch the config of a process directly.

Example:

Description:

Update

You can change the process configuration of an existing process. It doesn't matter whether the process to be updated is currently running or not. The current order will be transferred to the updated process.

The new process configuration is not required to have the same ID as the one you're about to replace. After the successful update you have to use the new process ID in order to query or manipulate the process.

The new process configuration is checked for validity before it replaces the current process configuration.

As of version 16.12.0 you can provide a partial process config for updates, i.e. you need to PUT only those fields that actually change.
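
With version 16.12.0 or later, a partial update that only changes the global options of the process test could look like this minimal sketch; all other fields keep their current values:

curl http://127.0.0.1:8080/api/v3/process/test \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X PUT \
   -d '{
         "options": ["-loglevel", "info"]
       }'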

Example:

Client details

Description:

Delete

Delete a process. If the process is currently running, it will be stopped gracefully before it is removed.

Example:

Description:

Get a summary of all active and past sessions

get

Get a summary of all active and past sessions of the given collector.

Authorizations
Query parameters
collectors (string, optional)

Comma separated list of collectors

Responses
200
Sessions summary
application/json
get
200

Sessions summary

Get a minimal summary of all active sessions

get

Get a minimal summary of all active sessions (i.e. number of sessions, bandwidth).

Authorizations
Query parameters
collectors (string, optional)

Comma separated list of collectors

Responses
200
Active sessions listing
application/json
get
200

Active sessions listing

List all known metrics with their description and labels

get

List all known metrics with their description and labels

Authorizations
Responses
200
OK
application/json
get
200

OK

Query the collected metrics

post

Query the collected metrics

Authorizations
Body
interval_sec (integer · int64, optional)
timerange_sec (integer · int64, optional)
Responses
200
OK
application/json
400
Bad Request
application/json
post
GET /v3/session HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
{
  "ANY_ADDITIONAL_PROPERTY": {
    "active": {
      "bandwidth_rx_mbit": 1,
      "bandwidth_tx_mbit": 1,
      "list": [
        {
          "bandwidth_rx_kbit": 1,
          "bandwidth_tx_kbit": 1,
          "bytes_rx": 1,
          "bytes_tx": 1,
          "created_at": 1,
          "extra": "text",
          "id": "text",
          "local": "text",
          "reference": "text",
          "remote": "text"
        }
      ],
      "max_bandwidth_rx_mbit": 1,
      "max_bandwidth_tx_mbit": 1,
      "max_sessions": 1,
      "sessions": 1
    },
    "summary": {
      "local": {
        "ANY_ADDITIONAL_PROPERTY": {
          "sessions": 1,
          "traffic_rx_mb": 1,
          "traffic_tx_mb": 1
        }
      },
      "reference": {
        "ANY_ADDITIONAL_PROPERTY": {
          "sessions": 1,
          "traffic_rx_mb": 1,
          "traffic_tx_mb": 1
        }
      },
      "remote": {
        "ANY_ADDITIONAL_PROPERTY": {
          "local": {
            "ANY_ADDITIONAL_PROPERTY": {
              "sessions": 1,
              "traffic_rx_mb": 1,
              "traffic_tx_mb": 1
            }
          },
          "sessions": 1,
          "traffic_rx_mb": 1,
          "traffic_tx_mb": 1
        }
      },
      "sessions": 1,
      "traffic_rx_mb": 1,
      "traffic_tx_mb": 1
    }
  }
}
GET /v3/session/active HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
{
  "ANY_ADDITIONAL_PROPERTY": [
    {
      "bandwidth_rx_kbit": 1,
      "bandwidth_tx_kbit": 1,
      "bytes_rx": 1,
      "bytes_tx": 1,
      "created_at": 1,
      "extra": "text",
      "id": "text",
      "local": "text",
      "reference": "text",
      "remote": "text"
    }
  ]
}
GET /v3/metrics HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
[
  {
    "description": "text",
    "labels": [
      "text"
    ],
    "name": "text"
  }
]
POST /v3/metrics HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 108

{
  "interval_sec": 1,
  "metrics": [
    {
      "labels": {
        "ANY_ADDITIONAL_PROPERTY": "text"
      },
      "name": "text"
    }
  ],
  "timerange_sec": 1
}
{
  "interval_sec": 1,
  "metrics": [
    {
      "labels": {
        "ANY_ADDITIONAL_PROPERTY": "text"
      },
      "name": "text",
      "values": [
        {
          "ts": "text",
          "value": 1
        }
      ]
    }
  ],
  "timerange_sec": 1
}
{
   "input": [{"address": "input"}],
   "output": [{"address": "output"}]
}
-i input output
{
    "id": "some_id",
    "reference": "some reference"
}
[
    {             
        "id": "video_in",
        "address": "testsrc=size=1280x720:rate=25",
        "options": ["-f", "lavfi", "-re"]
    },
    {
        "id": "audio_in",
        "address": "anullsrc=r=44100:cl=stereo",
        "options": ["-f", "lavfi"]
    }
]
-f lavfi -re -i testsrc=size=1280x720:rate=25 -f lavfi -i anullsrc=r=44100:cl=stereo
[
    {
        "id": "out",
        "address": "rtmp://someip/live/stream",
        "options": [
            "-codec:v", "libx264",
            "-r", "25",
            "-g", "50",
            "-preset:v", "ultrafast",
            "-b:v", "2M",
            "-codec:a", "aac",
            "-b:a", "64k",
            "-f", "flv",
        ]
    }
]
-codec:v libx264 -r 25 -g 50 -preset:v ultrafast -b:v 2M -codec:a aac -b:a 64k -f flv rtmp://someip/live/stream 
{
    "input": [
        {             
            "id": "video_in",
            "address": "testsrc=size=1280x720:rate=25",
            "options": ["-f", "lavfi", "-re"]
        },
        {
            "id": "audio_in",
            "address": "anullsrc=r=44100:cl=stereo",
            "options": ["-f", "lavfi"]
        }
    ],
    "output": [
        {
            "id": "out",
            "address": "rtmp://someip/live/stream",
            "options": [
                "-codec:v", "libx264",
                "-r", "25",
                "-g", "50",
                "-preset:v", "ultrafast",
                "-b:v", "2M",
                "-codec:a", "aac",
                "-b:a", "64k",
                "-f", "flv"
            ]
        }
    ]
}
-f lavfi -re -i testsrc=size=1280x720:rate=25 -f lavfi -i anullsrc=r=44100:cl=stereo -codec:v libx264 -r 25 -g 50 -preset:v ultrafast -b:v 2M -codec:a aac -b:a 64k -f flv rtmp://someip/live/stream
{
    "options": ["-loglevel", "error"],
    "input": [...],
    "output": [...]
}
-loglevel error -f lavfi -re -i testsrc=size=1280x720:rate=25 -f lavfi -i anullsrc=r=44100:cl=stereo -codec:v libx264 -r 25 -g 50 -preset:v ultrafast -b:v 2M -codec:a aac -b:a 64k -f flv rtmp://someip/live/stream
{
    "reconnect": true,
    "reconnect_delay_seconds": 10,
    "autostart": true,
    "stale_timeout_seconds": 15,
    "options": [...],
    "input": [...],
    "output": [...]
}
{
    "limits": {
        "cpu_usage": 10,
        "memory_mbytes": 50,
        "waitfor_seconds": 30,
    },
    "options": [...],
    "input": [...],
    "output": [...]
}
{
   "output": [
      {
         "cleanup": [{
            "pattern": "memfs:fo*ar",
            "max_files": "23",
            "max_file_age_seconds": "235",
            "purge_on_delete": true
         }],
      }
   ],
}
diskfs:/{processid}.m3u8
diskfs:/{processid}*.(m3u8|ts)
diskfs:/{reference}/{processid}/{outputid}/*.ts
diskfs:/{reference}*
diskfs:/{reference}**
Process 1
{
    "id": "process_1",
    "input": [
        {             
            "id": "video_in",
            "address": "testsrc=size=1280x720:rate=25",
            "options": ["-f", "lavfi", "-re"]
        }
    ],
    "output": [
        {
            "id": "out",
            "address": "rtmp://someip/live/stream",
            "options": [
                "-codec:v", "libx264",
                "-r", "25",
                "-g", "50",
                "-preset:v", "ultrafast",
                "-b:v", "2M",
                "-f", "flv"
            ]
        }
    ]
}
Process 2
{
    "id": "process_2",
    "input": [
        {             
            "id": "video_in",
            "address": "#process_1:output=out"
        }
    ],
    "output": [
        {
            "id": "out",
            "address": "-",
            "options": [
                "-codec", "copy",
                "-f", "null"
            ]
        }
    ]
}
{
   "id": "b",
   "reference": "a",
   "input: [...],
   "output": [
      {
         "id": "360",
         "address": "{diskfs}/{reference}_{processid}_{outputid}.mp4
      },
      {
         "id": "720",
         "address": "{diskfs}/{reference}_{processid}_{outputid}.mp4
      },
      {
         "id": "1080",
         "address": "{diskfs}/{reference}_{processid}_{outputid}.mp4
      }
   ],
}
curl http://127.0.0.1:8080/api/v3/process \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X POST \
   -d '{
         "id": "test",
         "options": ["-loglevel", "error"],
         "input": [
            {
               "address": "testsrc=size=1280x720:rate=25",
               "id": "0",
               "options": ["-f", "lavfi", "-re"]
            }
         ],
         "output": [
            {
               "address": "-",
               "id": "0",
               "options": ["-c:v", "libx264", "-f", "null"]
            }
         ]
      }'
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

client.v3_process_post(config={
    "id": "test",
    "options": ["-loglevel", "error"],
    "input": [
        {
            "address": "testsrc=size=1280x720:rate=25",
            "id": "input_0",
            "options": ["-f", "lavfi", "-re"],
        }
    ],
    "output": [
        {
            "address": "-",
            "id": "output_0",
            "options": ["-codec:v", "libx264", "-r", "25", "-f", "null"]
        }
    ]
})
import (
    "github.com/datarhei/core-client-go/v16"
    "github.com/datarhei/core-client-go/v16/api"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

process := api.ProcessConfig{
    ID: "test",
    Options: []string{"-loglevel", "error"},
    Input: []api.ProcessConfigIO{
        {
            ID: "0",
            Address: "testsrc=size=1280x720:rate=25",
            Options: []string{"-f", "lavfi", "-re"},
        },
    },
    Output: []api.ProcessConfigIO{
        {
            ID: "0",
            Address: "-",
            Options: []string{"-c:v", "libx264", "-f", "null"},
        },
    },
}

err := client.ProcessAdd(process)
if err != nil {
    ...
}
curl http://127.0.0.1:8080/api/v3/process \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

processes = client.v3_process_get_list(
    id="", idpattern="",
    reference="", refpattern="",
    filter=""
)
for process in processes:
    print(process)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

processes, err := client.ProcessList(coreclient.ProcessListOptions{})
if err != nil {
    ...
}
curl http://127.0.0.1:8080/api/v3/process/test \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

process_test = client.v3_process_get(
    id="test",
    filter=""
)
print(process_test)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

process, err := client.Process("test", []string{})
if err != nil {
    ...
}
curl http://127.0.0.1:8080/api/v3/process/test/config \
   -H 'accept: application/json' \
   -X GET
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

process_config_test = client.v3_process_get_config(
    id="test",
)
print(process_config_test)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

config, err := client.ProcessConfig("test")
if err != nil {
    ...
}
curl http://127.0.0.1:8080/api/v3/process/test \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X PUT \
   -d '{
         "id": "test",
         "options": ["-loglevel", "info"],
         "input": [
            {
               "address": "testsrc=size=1280x720:rate=25",
               "id": "0",
               "options": ["-f", "lavfi", "-re"]
            }
         ],
         "output": [
            {
               "address": "-",
               "id": "0",
               "options": ["-c:v", "libx264", "-f", "null"]
            }
         ]
      }'
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

client.v3_process_put(
    id="test",
    config={
        "id": "test",
        "options": ["-loglevel", "info"],
        "input": [
            {
                "address": "testsrc=size=1280x720:rate=25",
                "id": "input_0",
                "options": ["-f", "lavfi", "-re"],
            }
        ],
        "output": [
            {
                "address": "-",
                "id": "output_0",
                "options": ["-codec:v", "libx264", "-r", "25", "-f", "null"]
            }
        ]
    }
)
import (
    "github.com/datarhei/core-client-go/v16"
    "github.com/datarhei/core-client-go/v16/api"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

process := api.ProcessConfig{
    ID: "test",
    Options: []string{"-loglevel", "info"},
    Input: []api.ProcessConfigIO{
        {
            ID: "0",
            Address: "testsrc=size=1280x720:rate=25",
            Options: []string{"-f", "lavfi", "-re"},
        },
    },
    Output: []api.ProcessConfigIO{
        {
            ID: "0",
            Address: "-",
            Options: []string{"-c:v", "libx264", "-f", "null"},
        },
    },
}

err := client.ProcessUpdate("test", process)
if err != nil {
    ...
}
curl http://127.0.0.1:8080/api/v3/process/test \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X DELETE
from core_client import Client

client = Client(
    base_url="http://127.0.0.1:8080"
)
client.login()

client.v3_process_delete(
    id="test",
)
import (
    "github.com/datarhei/core-client-go/v16"
)

client, _ := coreclient.New(coreclient.Config{
    Address: "https://127.0.0.1:8080",
})

err := client.ProcessDelete("test")
if err != nil {
    ...
}

Add a new process

post

Add a new FFmpeg process

Authorizations
Body
autostart (boolean, optional)
id (string, optional)
options (string[], optional)
reconnect (boolean, optional)
reconnect_delay_seconds (integer, optional)
reference (string, optional)
stale_timeout_seconds (integer, optional)
type (string · enum, optional) Possible values:
Responses
200
OK
application/json
400
Bad Request
application/json
post
POST /v3/process HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 525

{
  "autostart": true,
  "id": "text",
  "input": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "limits": {
    "cpu_usage": 1,
    "memory_mbytes": 1,
    "waitfor_seconds": 1
  },
  "options": [
    "text"
  ],
  "output": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "reconnect": true,
  "reconnect_delay_seconds": 1,
  "reference": "text",
  "stale_timeout_seconds": 1,
  "type": "ffmpeg"
}
{
  "autostart": true,
  "id": "text",
  "input": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "limits": {
    "cpu_usage": 1,
    "memory_mbytes": 1,
    "waitfor_seconds": 1
  },
  "options": [
    "text"
  ],
  "output": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "reconnect": true,
  "reconnect_delay_seconds": 1,
  "reference": "text",
  "stale_timeout_seconds": 1,
  "type": "ffmpeg"
}

List all known processes

get

List all known processes. Use the query parameter to filter the listed processes.

Authorizations
Query parameters
filter (string, optional)

Comma separated list of fields (config, state, report, metadata) that will be part of the output. If empty, all fields will be part of the output.

reference (string, optional)

Return only those processes that have this reference value. If empty, the reference will be ignored.

id (string, optional)

Comma separated list of process IDs to list. Overrides the reference. If empty, all IDs will be returned.

idpattern (string, optional)

Glob pattern for process IDs. If empty, all IDs will be returned. Intersected with results from refpattern.

refpattern (string, optional)

Glob pattern for process references. If empty, all IDs will be returned. Intersected with results from idpattern.

Responses
200
OK
application/json
get
GET /v3/process HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
200

OK

[
  {
    "config": {
      "autostart": true,
      "id": "text",
      "input": [
        {
          "address": "text",
          "cleanup": [
            {
              "max_file_age_seconds": 1,
              "max_files": 1,
              "pattern": "text",
              "purge_on_delete": true
            }
          ],
          "id": "text",
          "options": [
            "text"
          ]
        }
      ],
      "limits": {
        "cpu_usage": 1,
        "memory_mbytes": 1,
        "waitfor_seconds": 1
      },
      "options": [
        "text"
      ],
      "output": [
        {
          "address": "text",
          "cleanup": [
            {
              "max_file_age_seconds": 1,
              "max_files": 1,
              "pattern": "text",
              "purge_on_delete": true
            }
          ],
          "id": "text",
          "options": [
            "text"
          ]
        }
      ],
      "reconnect": true,
      "reconnect_delay_seconds": 1,
      "reference": "text",
      "stale_timeout_seconds": 1,
      "type": "ffmpeg"
    },
    "created_at": 1,
    "id": "text",
    "metadata": null,
    "reference": "text",
    "report": {
      "created_at": 1,
      "history": [
        {
          "created_at": 1,
          "log": [
            [
              "text"
            ]
          ],
          "prelude": [
            "text"
          ]
        }
      ],
      "log": [
        [
          "text"
        ]
      ],
      "prelude": [
        "text"
      ]
    },
    "state": {
      "command": [
        "text"
      ],
      "cpu_usage": 1,
      "exec": "text",
      "last_logline": "text",
      "memory_bytes": 1,
      "order": "text",
      "progress": {
        "bitrate_kbit": 1,
        "drop": 1,
        "dup": 1,
        "fps": 1,
        "frame": 1,
        "inputs": [
          {
            "address": "text",
            "avstream": {
              "aqueue": 1,
              "drop": 1,
              "dup": 1,
              "duplicating": true,
              "enc": 1,
              "gop": "text",
              "input": {
                "packet": 1,
                "size_kb": 1,
                "state": "running",
                "time": 1
              },
              "looping": true,
              "output": {
                "packet": 1,
                "size_kb": 1,
                "state": "running",
                "time": 1
              },
              "queue": 1
            },
            "bitrate_kbit": 1,
            "channels": 1,
            "codec": "text",
            "coder": "text",
            "format": "text",
            "fps": 1,
            "frame": 1,
            "height": 1,
            "id": "text",
            "index": 1,
            "layout": "text",
            "packet": 1,
            "pix_fmt": "text",
            "pps": 1,
            "q": 1,
            "sampling_hz": 1,
            "size_kb": 1,
            "stream": 1,
            "type": "text",
            "width": 1
          }
        ],
        "outputs": [
          {
            "address": "text",
            "avstream": {
              "aqueue": 1,
              "drop": 1,
              "dup": 1,
              "duplicating": true,
              "enc": 1,
              "gop": "text",
              "input": {
                "packet": 1,
                "size_kb": 1,
                "state": "running",
                "time": 1
              },
              "looping": true,
              "output": {
                "packet": 1,
                "size_kb": 1,
                "state": "running",
                "time": 1
              },
              "queue": 1
            },
            "bitrate_kbit": 1,
            "channels": 1,
            "codec": "text",
            "coder": "text",
            "format": "text",
            "fps": 1,
            "frame": 1,
            "height": 1,
            "id": "text",
            "index": 1,
            "layout": "text",
            "packet": 1,
            "pix_fmt": "text",
            "pps": 1,
            "q": 1,
            "sampling_hz": 1,
            "size_kb": 1,
            "stream": 1,
            "type": "text",
            "width": 1
          }
        ],
        "packet": 1,
        "q": 1,
        "size_kb": 1,
        "speed": 1,
        "time": 1
      },
      "reconnect_seconds": 1,
      "runtime_seconds": 1
    },
    "type": "text"
  }
]

List a process by its ID

get

List a process by its ID. Use the filter parameter to specify the level of detail of the output.

Authorizations
Path parameters
id (string, required)

Process ID

Query parameters
filter (string, optional)

Comma separated list of fields (config, state, report, metadata) to be part of the output. If empty, all fields will be part of the output

Responses
200
OK
application/json
404
Not Found
application/json
get
GET /v3/process/{id} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
{
  "config": {
    "autostart": true,
    "id": "text",
    "input": [
      {
        "address": "text",
        "cleanup": [
          {
            "max_file_age_seconds": 1,
            "max_files": 1,
            "pattern": "text",
            "purge_on_delete": true
          }
        ],
        "id": "text",
        "options": [
          "text"
        ]
      }
    ],
    "limits": {
      "cpu_usage": 1,
      "memory_mbytes": 1,
      "waitfor_seconds": 1
    },
    "options": [
      "text"
    ],
    "output": [
      {
        "address": "text",
        "cleanup": [
          {
            "max_file_age_seconds": 1,
            "max_files": 1,
            "pattern": "text",
            "purge_on_delete": true
          }
        ],
        "id": "text",
        "options": [
          "text"
        ]
      }
    ],
    "reconnect": true,
    "reconnect_delay_seconds": 1,
    "reference": "text",
    "stale_timeout_seconds": 1,
    "type": "ffmpeg"
  },
  "created_at": 1,
  "id": "text",
  "metadata": null,
  "reference": "text",
  "report": {
    "created_at": 1,
    "history": [
      {
        "created_at": 1,
        "log": [
          [
            "text"
          ]
        ],
        "prelude": [
          "text"
        ]
      }
    ],
    "log": [
      [
        "text"
      ]
    ],
    "prelude": [
      "text"
    ]
  },
  "state": {
    "command": [
      "text"
    ],
    "cpu_usage": 1,
    "exec": "text",
    "last_logline": "text",
    "memory_bytes": 1,
    "order": "text",
    "progress": {
      "bitrate_kbit": 1,
      "drop": 1,
      "dup": 1,
      "fps": 1,
      "frame": 1,
      "inputs": [
        {
          "address": "text",
          "avstream": {
            "aqueue": 1,
            "drop": 1,
            "dup": 1,
            "duplicating": true,
            "enc": 1,
            "gop": "text",
            "input": {
              "packet": 1,
              "size_kb": 1,
              "state": "running",
              "time": 1
            },
            "looping": true,
            "output": {
              "packet": 1,
              "size_kb": 1,
              "state": "running",
              "time": 1
            },
            "queue": 1
          },
          "bitrate_kbit": 1,
          "channels": 1,
          "codec": "text",
          "coder": "text",
          "format": "text",
          "fps": 1,
          "frame": 1,
          "height": 1,
          "id": "text",
          "index": 1,
          "layout": "text",
          "packet": 1,
          "pix_fmt": "text",
          "pps": 1,
          "q": 1,
          "sampling_hz": 1,
          "size_kb": 1,
          "stream": 1,
          "type": "text",
          "width": 1
        }
      ],
      "outputs": [
        {
          "address": "text",
          "avstream": {
            "aqueue": 1,
            "drop": 1,
            "dup": 1,
            "duplicating": true,
            "enc": 1,
            "gop": "text",
            "input": {
              "packet": 1,
              "size_kb": 1,
              "state": "running",
              "time": 1
            },
            "looping": true,
            "output": {
              "packet": 1,
              "size_kb": 1,
              "state": "running",
              "time": 1
            },
            "queue": 1
          },
          "bitrate_kbit": 1,
          "channels": 1,
          "codec": "text",
          "coder": "text",
          "format": "text",
          "fps": 1,
          "frame": 1,
          "height": 1,
          "id": "text",
          "index": 1,
          "layout": "text",
          "packet": 1,
          "pix_fmt": "text",
          "pps": 1,
          "q": 1,
          "sampling_hz": 1,
          "size_kb": 1,
          "stream": 1,
          "type": "text",
          "width": 1
        }
      ],
      "packet": 1,
      "q": 1,
      "size_kb": 1,
      "speed": 1,
      "time": 1
    },
    "reconnect_seconds": 1,
    "runtime_seconds": 1
  },
  "type": "text"
}

Get the configuration of a process

get

Get the configuration of a process. This is the configuration as provided by Add or Update.

Authorizations
Path parameters
id (string, required)

Process ID

Responses
200
OK
application/json
400
Bad Request
application/json
404
Not Found
application/json
get
GET /v3/process/{id}/config HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
{
  "autostart": true,
  "id": "text",
  "input": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "limits": {
    "cpu_usage": 1,
    "memory_mbytes": 1,
    "waitfor_seconds": 1
  },
  "options": [
    "text"
  ],
  "output": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "reconnect": true,
  "reconnect_delay_seconds": 1,
  "reference": "text",
  "stale_timeout_seconds": 1,
  "type": "ffmpeg"
}

Replace an existing process

put

Replace an existing process.

Authorizations
Path parameters
id (string, required)

Process ID

Body
autostart (boolean, optional)
id (string, optional)
options (string[], optional)
reconnect (boolean, optional)
reconnect_delay_seconds (integer, optional)
reference (string, optional)
stale_timeout_seconds (integer, optional)
type (string · enum, optional) Possible values:
Responses
200
OK
application/json
400
Bad Request
application/json
404
Not Found
application/json
put
PUT /v3/process/{id} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 525

{
  "autostart": true,
  "id": "text",
  "input": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "limits": {
    "cpu_usage": 1,
    "memory_mbytes": 1,
    "waitfor_seconds": 1
  },
  "options": [
    "text"
  ],
  "output": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "reconnect": true,
  "reconnect_delay_seconds": 1,
  "reference": "text",
  "stale_timeout_seconds": 1,
  "type": "ffmpeg"
}
{
  "autostart": true,
  "id": "text",
  "input": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "limits": {
    "cpu_usage": 1,
    "memory_mbytes": 1,
    "waitfor_seconds": 1
  },
  "options": [
    "text"
  ],
  "output": [
    {
      "address": "text",
      "cleanup": [
        {
          "max_file_age_seconds": 1,
          "max_files": 1,
          "pattern": "text",
          "purge_on_delete": true
        }
      ],
      "id": "text",
      "options": [
        "text"
      ]
    }
  ],
  "reconnect": true,
  "reconnect_delay_seconds": 1,
  "reference": "text",
  "stale_timeout_seconds": 1,
  "type": "ffmpeg"
}

Delete a process by its ID

delete

Delete a process by its ID

Authorizations
Path parameters
id (string, required)

Process ID

Responses
200
OK
application/json
Response (string)
404
Not Found
application/json
delete
DELETE /v3/process/{id} HTTP/1.1
Host: api
Authorization: YOUR_API_KEY
Accept: */*
text

Beginner

Quick introduction to using the core API and FFmpeg processes.

Guide content

At the end of this guide, two FFmpeg processes will be running, using the RTMP server and the in-memory filesystem.
  1. Starting the Core container

  2. Configure and restart the Core

  3. Creating, verifying, updating, and deleting an FFmpeg process

  4. Using placeholders for the in-memory file system and RTMP server

  5. Analyze FFmpeg process errors

1. Start the Core

docker run --detach --name core --publish 8080:8080 datarhei/core:latest

2. Enable the RTMP server on the Core

  1. Configuring the Core via the API

  2. Restarting the Core via the API and loading the changed configuration

  3. Calling the RTMP API

2.1 Changing the Configuration

curl http://127.0.0.1:8080/api/v3/config \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X PUT \
   -d '{
         "version": 3,
         "rtmp": {
            "enable": true
         }
      }'
"OK"

API Documentation > Configuration

2.2 Apply the change

curl http://127.0.0.1:8080/api/v3/config/reload \
    -X GET
"OK"

2.3 Checking the RTMP server

curl http://127.0.0.1:8080/api/v3/rtmp \
    -X GET
[]

3. Main-process

Fig. 1: Create the main-process
  1. Initiating the main-process using the Process API

  2. Check the main-process via the Process API

  3. Update the main-process via the Process API

  4. Check the change of the main-process via the Process API

  5. Monitor the RTMP stream via the RTMP API

3.1 Create the FFmpeg process

curl http://127.0.0.1:8080/api/v3/process \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X POST \
   -d '{
         "id": "main",
         "options": [],
         "input": [
            {
               "address": "testsrc=size=1280x720:rate=25",
               "id": "0",
               "options": ["-f", "lavfi", "-re"]
            }
         ],
         "output": [
            {
               "address": "-",
               "id": "0",
               "options": ["-c:v", "libx264", "-f", "null"]
            }
         ],
         "autostart": true,
         "reconnect": true,
         "reconnect_delay_seconds": 10,
         "stale_timeout_seconds": 10
      }'

The configuration creates a process with a virtual video test input and a null output, i.e. the encoded video is discarded.

{
    "id": "main",
    "type": "ffmpeg",
    "reference": "",
    "input": [{
        "id": "0",
        "address": "testsrc=size=1280x720:rate=25",
        "options": ["-f", "lavfi", "-re"]
    }],
    "output": [{
        "id": "0",
        "address": "-",
        "options": ["-c:v", "libx264", "-f", "null"]
    }],
    "options": [],
    "reconnect": true,
    "reconnect_delay_seconds": 10,
    "autostart": true,
    "stale_timeout_seconds": 10,
    "limits": {
        "cpu_usage": 0,
        "memory_mbytes": 0,
        "waitfor_seconds": 0
    }
}

API Documentation > FFmpeg processing

3.2 Check that the FFmpeg process is running

curl http://127.0.0.1:8080/api/v3/process/main/state \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
{
    "order": "start",
    "exec": "running",
    "runtime_seconds": 36,
    "reconnect_seconds": -1,
    "last_logline": "ffmpeg.mapping:{\"graphs\":[{\"index\":0,\"graph\":[{\"src_name\":\"Parsed_null_0\",\"src_filter\":\"null\",\"dst_name\":\"auto_scale_0\",\"dst_filter\":\"scale\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/25\",\"type\":\"video\",\"format\":\"rgb24\",\"width\":1280,\"height\":720},{\"src_name\":\"graph 0 input from stream 0:0\",\"src_filter\":\"buffer\",\"dst_name\":\"Parsed_null_0\",\"dst_filter\":\"null\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/25\",\"type\":\"video\",\"format\":\"rgb24\",\"width\":1280,\"height\":720},{\"src_name\":\"format\",\"src_filter\":\"format\",\"dst_name\":\"out_0_0\",\"dst_filter\":\"buffersink\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/25\",\"type\":\"video\",\"format\":\"yuv444p\",\"width\":1280,\"height\":720},{\"src_name\":\"auto_scale_0\",\"src_filter\":\"scale\",\"dst_name\":\"format\",\"dst_filter\":\"format\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/25\",\"type\":\"video\",\"format\":\"yuv444p\",\"width\":1280,\"height\":720}]}],\"mapping\":[{\"input\":{\"index\":0,\"stream\":0},\"graph\":{\"index\":0,\"name\":\"graph 0 input from stream 0:0\"},\"output\":null},{\"input\":null,\"graph\":{\"index\":0,\"name\":\"out_0_0\"},\"output\":{\"index\":0,\"stream\":0}}]}",
    "progress": {
        "inputs": [{
            "id": "0",
            "address": "testsrc=size=1280x720:rate=25",
            "index": 0,
            "stream": 0,
            "format": "lavfi",
            "type": "video",
            "codec": "rawvideo",
            "coder": "rawvideo",
            "frame": 910,
            "fps": 25.267,
            "packet": 910,
            "pps": 25.267,
            "size_kb": 2457000,
            "bitrate_kbit": 545760.000,
            "pix_fmt": "rgb24",
            "q": 0.000,
            "width": 1280,
            "height": 720,
            "avstream": null
        }],
        "outputs": [{
            "id": "0",
            "address": "pipe:",
            "index": 0,
            "stream": 0,
            "format": "null",
            "type": "video",
            "codec": "h264",
            "coder": "libx264",
            "frame": 910,
            "fps": 25.267,
            "packet": 860,
            "pps": 25.267,
            "size_kb": 332,
            "bitrate_kbit": 76.800,
            "pix_fmt": "yuv444p",
            "q": 28.000,
            "width": 1280,
            "height": 720,
            "avstream": null
        }],
        "frame": 910,
        "packet": 860,
        "fps": 25.267,
        "q": 28,
        "size_kb": 332,
        "time": 34.320,
        "bitrate_kbit": 76.800,
        "speed": 0.943,
        "drop": 0,
        "dup": 0
    },
    "memory_bytes": 427692032,
    "cpu_usage": 22.015,
    "command": ["-f", "lavfi", "-re", "-i", "testsrc=size=1280x720:rate=25", "-c:v", "libx264", "-f", "null", "-"]
}

The process is running if the exec state in the response is running and the progress.packet is increasing.
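
If you only want to check these two fields, you can pipe the state endpoint through a JSON processor. This sketch assumes the jq tool is installed on the machine running curl; it is not part of the Core:

curl -s http://127.0.0.1:8080/api/v3/process/main/state | jq '{exec: .exec, packet: .progress.packet}'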

3.3 Update the FFmpeg process

curl http://127.0.0.1:8080/api/v3/process/main \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X PUT \
   -d '{
         "id": "main",
         "options": [],
         "input": [
            {
               "address": "testsrc=size=1280x720:rate=25",
               "id": "0",
               "options": ["-f", "lavfi", "-re"]
            }
         ],
         "output": [
            {
               "address": "{rtmp,name=main}",
               "id": "0",
               "options": ["-c:v", "libx264", "-f", "flv"]
            }
         ],
         "autostart": true,
         "reconnect": true,
         "reconnect_delay_seconds": 10,
         "stale_timeout_seconds": 10
      }'

This configuration creates an RTMP stream as output and sends it to the internal RTMP server. It uses the {rtmp} placeholder for ease of integration.

{
    "id": "main",
    "type": "ffmpeg",
    "reference": "",
    "input": [{
        "id": "0",
        "address": "testsrc=size=1280x720:rate=25",
        "options": ["-f", "lavfi", "-re"]
    }],
    "output": [{
        "id": "0",
        "address": "{rtmp,name=main}",
        "options": ["-c:v", "libx264", "-f", "flv"]
    }],
    "options": [],
    "reconnect": true,
    "reconnect_delay_seconds": 10,
    "autostart": true,
    "stale_timeout_seconds": 10,
    "limits": {
        "cpu_usage": 0,
        "memory_mbytes": 0,
        "waitfor_seconds": 0
    }
}

3.4 Check that the FFmpeg process is running again

curl http://127.0.0.1:8080/api/v3/process/main/state \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
{
    "order": "start",
    "exec": "running",
    "runtime_seconds": 32,
    "reconnect_seconds": -1,
    "last_logline": "ffmpeg.mapping:{\"graphs\":[{\"index\":0,\"graph\":[{\"src_name\":\"Parsed_null_0\",\"src_filter\":\"null\",\"dst_name\":\"auto_scale_0\",\"dst_filter\":\"scale\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/25\",\"type\":\"video\",\"format\":\"rgb24\",\"width\":1280,\"height\":720},{\"src_name\":\"graph 0 input from stream 0:0\",\"src_filter\":\"buffer\",\"dst_name\":\"Parsed_null_0\",\"dst_filter\":\"null\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/25\",\"type\":\"video\",\"format\":\"rgb24\",\"width\":1280,\"height\":720},{\"src_name\":\"format\",\"src_filter\":\"format\",\"dst_name\":\"out_0_0\",\"dst_filter\":\"buffersink\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/25\",\"type\":\"video\",\"format\":\"yuv444p\",\"width\":1280,\"height\":720},{\"src_name\":\"auto_scale_0\",\"src_filter\":\"scale\",\"dst_name\":\"format\",\"dst_filter\":\"format\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/25\",\"type\":\"video\",\"format\":\"yuv444p\",\"width\":1280,\"height\":720}]}],\"mapping\":[{\"input\":{\"index\":0,\"stream\":0},\"graph\":{\"index\":0,\"name\":\"graph 0 input from stream 0:0\"},\"output\":null},{\"input\":null,\"graph\":{\"index\":0,\"name\":\"out_0_0\"},\"output\":{\"index\":0,\"stream\":0}}]}",
    "progress": {
        "inputs": [{
            "id": "0",
            "address": "testsrc=size=1280x720:rate=25",
            "index": 0,
            "stream": 0,
            "format": "lavfi",
            "type": "video",
            "codec": "rawvideo",
            "coder": "rawvideo",
            "frame": 796,
            "fps": 24.833,
            "packet": 796,
            "pps": 24.833,
            "size_kb": 2149200,
            "bitrate_kbit": 536400.000,
            "pix_fmt": "rgb24",
            "q": 0.000,
            "width": 1280,
            "height": 720,
            "avstream": null
        }],
        "outputs": [{
            "id": "0",
            "address": "rtmp://localhost:1935/main",
            "index": 0,
            "stream": 0,
            "format": "flv",
            "type": "video",
            "codec": "h264",
            "coder": "libx264",
            "frame": 796,
            "fps": 24.833,
            "packet": 746,
            "pps": 24.833,
            "size_kb": 284,
            "bitrate_kbit": 73.600,
            "pix_fmt": "yuv444p",
            "q": 25.000,
            "width": 1280,
            "height": 720,
            "avstream": null
        }],
        "frame": 796,
        "packet": 746,
        "fps": 24.833,
        "q": 25,
        "size_kb": 284,
        "time": 29.720,
        "bitrate_kbit": 73.600,
        "speed": 0.934,
        "drop": 0,
        "dup": 0
    },
    "memory_bytes": 428478464,
    "cpu_usage": 18.249,
    "command": ["-f", "lavfi", "-re", "-i", "testsrc=size=1280x720:rate=25", "-c:v", "libx264", "-f", "flv", "rtmp://localhost:1935/main"]
}

3.5 Check that the RTMP stream is available

curl http://127.0.0.1:8080/api/v3/rtmp \
    -X GET
[{
    "name": "/main"
}]

Now you can pull the stream from the RTMP server with the URL rtmp://127.0.0.1:1935/main.
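
To verify playback outside of the API, you can open the stream with any RTMP-capable player, for example with a local FFmpeg installation:

ffplay rtmp://127.0.0.1:1935/main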

API Documentation > RTMP

4. Sub-process

Fig. 2: Create the sub-process
  1. Initiate the sub-process through the Process API

  2. Check the sub-process through the Process API

  3. Monitor the HLS stream via the file system API

4.1 Create the FFmpeg process

curl http://127.0.0.1:8080/api/v3/process \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X POST \
   -d '{
         "id": "sub",
         "options": [],
         "input": [
            {
               "address": "{rtmp,name=main}",
               "id": "0",
               "options": []
            }
         ],
         "output": [
            {
               "address": "{memfs}/{processid}.m3u8",
               "id": "0",
               "options": ["-f", "hls"]
            }
         ],
         "autostart": true,
         "reconnect": true,
         "reconnect_delay_seconds": 10,
         "stale_timeout_seconds": 10
      }'

This configuration uses the RTMP stream from the main-process and creates an HLS stream as an output on the internal in-memory file system. It uses the {memfs} placeholder for ease of integration.

{
    "id": "sub",
    "type": "ffmpeg",
    "reference": "",
    "input": [{
        "id": "0",
        "address": "{rtmp,name=main}",
        "options": []
    }],
    "output": [{
        "id": "0",
        "address": "{memfs}/{processid}.m3u8",
        "options": ["-f", "hls"]
    }],
    "options": [],
    "reconnect": true,
    "reconnect_delay_seconds": 10,
    "autostart": true,
    "stale_timeout_seconds": 10,
    "limits": {
        "cpu_usage": 0,
        "memory_mbytes": 0,
        "waitfor_seconds": 0
    }
}

4.2 Check that the FFmpeg process is running

curl http://127.0.0.1:8080/api/v3/process/sub/state \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
{
    "order": "start",
    "exec": "running",
    "runtime_seconds": 31,
    "reconnect_seconds": -1,
    "last_logline": "[hls @ 0x7fbf1e8fd340] Opening 'http://admin:irIVvk4miq3JhKGZnY@localhost:8080/memfs/sub_22.ts' for writing",
    "progress": {
        "inputs": [{
            "id": "0",
            "address": "rtmp://localhost:1935/main",
            "index": 0,
            "stream": 0,
            "format": "flv",
            "type": "video",
            "codec": "h264",
            "coder": "h264",
            "frame": 828,
            "fps": 29.704,
            "packet": 834,
            "pps": 29.926,
            "size_kb": 322,
            "bitrate_kbit": 93.037,
            "pix_fmt": "yuv444p",
            "q": 0.000,
            "width": 1280,
            "height": 720,
            "avstream": null
        }],
        "outputs": [{
            "id": "0",
            "address": "http://admin:irIVvk4miq3JhKGZnY@localhost:8080/memfs/sub.m3u8",
            "index": 0,
            "stream": 0,
            "format": "hls",
            "type": "video",
            "codec": "h264",
            "coder": "libx264",
            "frame": 828,
            "fps": 29.704,
            "packet": 778,
            "pps": 27.852,
            "size_kb": 322,
            "bitrate_kbit": 92.741,
            "pix_fmt": "yuv444p",
            "q": 28.000,
            "width": 1280,
            "height": 720,
            "avstream": null
        }],
        "frame": 828,
        "packet": 778,
        "fps": 29.704,
        "q": 28,
        "size_kb": 322,
        "time": 31.400,
        "bitrate_kbit": 92.741,
        "speed": 1.100,
        "drop": 0,
        "dup": 0
    },
    "memory_bytes": 467107840,
    "cpu_usage": 18.008,
    "command": ["-i", "rtmp://localhost:1935/main", "-f", "hls", "http://admin:irIVvk4miq3JhKGZnY@localhost:8080/memfs/sub.m3u8"]
}

4.3 Check that the HLS stream is available

curl http://127.0.0.1:8080/api/v3/fs/mem \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
[{
    "name": "/sub0.ts",
    "size_bytes": 143444,
    "last_modified": 1674208099
}, {
    "name": "/sub1.ts",
    "size_bytes": 139684,
    "last_modified": 1674208109
}, {
    "name": "/sub2.ts",
    "size_bytes": 144008,
    "last_modified": 1674208119
}, {
    "name": "/sub3.ts",
    "size_bytes": 143068,
    "last_modified": 1674208129
}, {
    "name": "/sub4.ts",
    "size_bytes": 141376,
    "last_modified": 1674208139
}, {
    "name": "/sub5.ts",
    "size_bytes": 142504,
    "last_modified": 1674208149
}, {
    "name": "/sub6.ts",
    "size_bytes": 143068,
    "last_modified": 1674208159
}, {
    "name": "/sub7.ts",
    "size_bytes": 141376,
    "last_modified": 1674208169
}, {
    "name": "/sub8.ts",
    "size_bytes": 142504,
    "last_modified": 1674208179
}, {
    "name": "/sub9.ts",
    "size_bytes": 143256,
    "last_modified": 1674208189
}, {
    "name": "/sub.m3u8",
    "size_bytes": 209,
    "last_modified": 1674208189
}]

Now you can pull the stream from the in-memory filesystem with the URL http://127.0.0.1:8080/memfs/sub.m3u8.
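
As with the RTMP output, you can fetch the playlist directly to verify it, or open it in any HLS-capable player:

curl http://127.0.0.1:8080/memfs/sub.m3u8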

API Documentation > Filesystem > In-Memory

5. Analyze process failures

  1. Stop the main-process through the Process API

  2. Check the sub-process via process API

  3. Analyze the sub-process log report via the Process API

  4. Start the main-process through the Process API

  5. Check the sub-process through the Process API

5.1 Create an error on the video input of the sub-process

curl http://127.0.0.1:8080/api/v3/process/main/command \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X PUT \
   -d '{"command": "stop"}'

This command stops the running main-process and interrupts the RTMP stream.

"OK"

5.2 Check the status of the sub-process

curl http://127.0.0.1:8080/api/v3/process/sub/state \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
{
    "order": "start",
    "exec": "failed",
    "runtime_seconds": 6,
    "reconnect_seconds": 3,
    "last_logline": "rtmp://localhost:1935/main: Broken pipe",
    "progress": {
        "inputs": [],
        "outputs": [],
        "frame": 0,
        "packet": 0,
        "fps": 0,
        "q": 0,
        "size_kb": 0,
        "time": 0,
        "bitrate_kbit": 0,
        "speed": 0,
        "drop": 0,
        "dup": 0
    },
    "memory_bytes": 0,
    "cpu_usage": 0,
    "command": ["-i", "rtmp://localhost:1935/main", "-f", "hls", "http://admin:irIVvk4miq3JhKGZnY@localhost:8080/memfs/sub.m3u8"]
}

If a process has order=start but does not report exec=running, or progress.packet is not increasing (or stays at 0), this indicates an error.

5.3 Analyze the sub-process report

curl http://127.0.0.1:8080/api/v3/process/sub/report \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
{
    "created_at": 1674134954,
    "prelude": ["ffmpeg version 5.1.2-datarhei Copyright (c) 2000-2022 the FFmpeg developers", "  built with gcc 11.2.1 (Alpine 11.2.1_git20220219) 20220219", "  configuration: --extra-version=datarhei --prefix=/usr --extra-libs='-lpthread -lxml2 -lm -lz -lsupc++ -lstdc++ -lssl -lcrypto -lz -lc -ldl' --enable-nonfree --enable-gpl --enable-version3 --enable-postproc --enable-static --enable-openssl --enable-libxml2 --enable-libv4l2 --enable-v4l2_m2m --enable-libfreetype --enable-libsrt --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libopus --enable-libvorbis --disable-ffplay --disable-debug --disable-doc --disable-shared", "  libavutil      57. 28.100 / 57. 28.100", "  libavcodec     59. 37.100 / 59. 37.100", "  libavformat    59. 27.100 / 59. 27.100", "  libavdevice    59.  7.100 / 59.  7.100", "  libavfilter     8. 44.100 /  8. 44.100", "  libswscale      6.  7.100 /  6.  7.100", "  libswresample   4.  7.100 /  4.  7.100", "  libpostproc    56.  6.100 / 56.  6.100", "rtmp://localhost:1935/main: Broken pipe"],
    "log": [
        ["1674134954", "ffmpeg version 5.1.2-datarhei Copyright (c) 2000-2022 the FFmpeg developers"],
        ["1674134954", "  built with gcc 11.2.1 (Alpine 11.2.1_git20220219) 20220219"],
        ["1674134954", "  configuration: --extra-version=datarhei --prefix=/usr --extra-libs='-lpthread -lxml2 -lm -lz -lsupc++ -lstdc++ -lssl -lcrypto -lz -lc -ldl' --enable-nonfree --enable-gpl --enable-version3 --enable-postproc --enable-static --enable-openssl --enable-libxml2 --enable-libv4l2 --enable-v4l2_m2m --enable-libfreetype --enable-libsrt --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libopus --enable-libvorbis --disable-ffplay --disable-debug --disable-doc --disable-shared"],
        ["1674134954", "  libavutil      57. 28.100 / 57. 28.100"],
        ["1674134954", "  libavcodec     59. 37.100 / 59. 37.100"],
        ["1674134954", "  libavformat    59. 27.100 / 59. 27.100"],
        ["1674134954", "  libavdevice    59.  7.100 / 59.  7.100"],
        ["1674134954", "  libavfilter     8. 44.100 /  8. 44.100"],
        ["1674134954", "  libswscale      6.  7.100 /  6.  7.100"],
        ["1674134954", "  libswresample   4.  7.100 /  4.  7.100"],
        ["1674134954", "  libpostproc    56.  6.100 / 56.  6.100"],
        ["1674134954", "rtmp://localhost:1935/main: Broken pipe"]
    ],
    "history": [{
        "created_at": 1674134924,
        "prelude": ["ffmpeg version 5.1.2-datarhei Copyright (c) 2000-2022 the FFmpeg developers", "  built with gcc 11.2.1 (Alpine 11.2.1_git20220219) 20220219", "  configuration: --extra-version=datarhei --prefix=/usr --extra-libs='-lpthread -lxml2 -lm -lz -lsupc++ -lstdc++ -lssl -lcrypto -lz -lc -ldl' --enable-nonfree --enable-gpl --enable-version3 --enable-postproc --enable-static --enable-openssl --enable-libxml2 --enable-libv4l2 --enable-v4l2_m2m --enable-libfreetype --enable-libsrt --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libopus --enable-libvorbis --disable-ffplay --disable-debug --disable-doc --disable-shared", "  libavutil      57. 28.100 / 57. 28.100", "  libavcodec     59. 37.100 / 59. 37.100", "  libavformat    59. 27.100 / 59. 27.100", "  libavdevice    59.  7.100 / 59.  7.100", "  libavfilter     8. 44.100 /  8. 44.100", "  libswscale      6.  7.100 /  6.  7.100", "  libswresample   4.  7.100 /  4.  7.100", "  libpostproc    56.  6.100 / 56.  6.100", "rtmp://localhost:1935/main: I/O error"],
        "log": [
            ["1674134924", "ffmpeg version 5.1.2-datarhei Copyright (c) 2000-2022 the FFmpeg developers"],
            ["1674134924", "  built with gcc 11.2.1 (Alpine 11.2.1_git20220219) 20220219"],
            ["1674134924", "  configuration: --extra-version=datarhei --prefix=/usr --extra-libs='-lpthread -lxml2 -lm -lz -lsupc++ -lstdc++ -lssl -lcrypto -lz -lc -ldl' --enable-nonfree --enable-gpl --enable-version3 --enable-postproc --enable-static --enable-openssl --enable-libxml2 --enable-libv4l2 --enable-v4l2_m2m --enable-libfreetype --enable-libsrt --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libopus --enable-libvorbis --disable-ffplay --disable-debug --disable-doc --disable-shared"],
            ["1674134924", "  libavutil      57. 28.100 / 57. 28.100"],
            ["1674134924", "  libavcodec     59. 37.100 / 59. 37.100"],
            ["1674134924", "  libavformat    59. 27.100 / 59. 27.100"],
            ["1674134924", "  libavdevice    59.  7.100 / 59.  7.100"],
            ["1674134924", "  libavfilter     8. 44.100 /  8. 44.100"],
            ["1674134924", "  libswscale      6.  7.100 /  6.  7.100"],
            ["1674134924", "  libswresample   4.  7.100 /  4.  7.100"],
            ["1674134924", "  libpostproc    56.  6.100 / 56.  6.100"],
            ["1674134924", "rtmp://localhost:1935/main: I/O error"]
        ]
    }, {
        "created_at": 1674134934,
        "prelude": ["ffmpeg version 5.1.2-datarhei Copyright (c) 2000-2022 the FFmpeg developers", "  built with gcc 11.2.1 (Alpine 11.2.1_git20220219) 20220219", "  configuration: --extra-version=datarhei --prefix=/usr --extra-libs='-lpthread -lxml2 -lm -lz -lsupc++ -lstdc++ -lssl -lcrypto -lz -lc -ldl' --enable-nonfree --enable-gpl --enable-version3 --enable-postproc --enable-static --enable-openssl --enable-libxml2 --enable-libv4l2 --enable-v4l2_m2m --enable-libfreetype --enable-libsrt --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libopus --enable-libvorbis --disable-ffplay --disable-debug --disable-doc --disable-shared", "  libavutil      57. 28.100 / 57. 28.100", "  libavcodec     59. 37.100 / 59. 37.100", "  libavformat    59. 27.100 / 59. 27.100", "  libavdevice    59.  7.100 / 59.  7.100", "  libavfilter     8. 44.100 /  8. 44.100", "  libswscale      6.  7.100 /  6.  7.100", "  libswresample   4.  7.100 /  4.  7.100", "  libpostproc    56.  6.100 / 56.  6.100", "rtmp://localhost:1935/main: Broken pipe"],
        "log": [
            ["1674134934", "ffmpeg version 5.1.2-datarhei Copyright (c) 2000-2022 the FFmpeg developers"],
            ["1674134934", "  built with gcc 11.2.1 (Alpine 11.2.1_git20220219) 20220219"],
            ["1674134934", "  configuration: --extra-version=datarhei --prefix=/usr --extra-libs='-lpthread -lxml2 -lm -lz -lsupc++ -lstdc++ -lssl -lcrypto -lz -lc -ldl' --enable-nonfree --enable-gpl --enable-version3 --enable-postproc --enable-static --enable-openssl --enable-libxml2 --enable-libv4l2 --enable-v4l2_m2m --enable-libfreetype --enable-libsrt --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libopus --enable-libvorbis --disable-ffplay --disable-debug --disable-doc --disable-shared"],
            ["1674134934", "  libavutil      57. 28.100 / 57. 28.100"],
            ["1674134934", "  libavcodec     59. 37.100 / 59. 37.100"],
            ["1674134934", "  libavformat    59. 27.100 / 59. 27.100"],
            ["1674134934", "  libavdevice    59.  7.100 / 59.  7.100"],
            ["1674134934", "  libavfilter     8. 44.100 /  8. 44.100"],
            ["1674134934", "  libswscale      6.  7.100 /  6.  7.100"],
            ["1674134934", "  libswresample   4.  7.100 /  4.  7.100"],
            ["1674134934", "  libpostproc    56.  6.100 / 56.  6.100"],
            ["1674134934", "rtmp://localhost:1935/main: Broken pipe"]
        ]
    }, {
        "created_at": 1674134944,
        "prelude": ["ffmpeg version 5.1.2-datarhei Copyright (c) 2000-2022 the FFmpeg developers", "  built with gcc 11.2.1 (Alpine 11.2.1_git20220219) 20220219", "  configuration: --extra-version=datarhei --prefix=/usr --extra-libs='-lpthread -lxml2 -lm -lz -lsupc++ -lstdc++ -lssl -lcrypto -lz -lc -ldl' --enable-nonfree --enable-gpl --enable-version3 --enable-postproc --enable-static --enable-openssl --enable-libxml2 --enable-libv4l2 --enable-v4l2_m2m --enable-libfreetype --enable-libsrt --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libopus --enable-libvorbis --disable-ffplay --disable-debug --disable-doc --disable-shared", "  libavutil      57. 28.100 / 57. 28.100", "  libavcodec     59. 37.100 / 59. 37.100", "  libavformat    59. 27.100 / 59. 27.100", "  libavdevice    59.  7.100 / 59.  7.100", "  libavfilter     8. 44.100 /  8. 44.100", "  libswscale      6.  7.100 /  6.  7.100", "  libswresample   4.  7.100 /  4.  7.100", "  libpostproc    56.  6.100 / 56.  6.100", "rtmp://localhost:1935/main: I/O error"],
        "log": [
            ["1674134944", "ffmpeg version 5.1.2-datarhei Copyright (c) 2000-2022 the FFmpeg developers"],
            ["1674134944", "  built with gcc 11.2.1 (Alpine 11.2.1_git20220219) 20220219"],
            ["1674134944", "  configuration: --extra-version=datarhei --prefix=/usr --extra-libs='-lpthread -lxml2 -lm -lz -lsupc++ -lstdc++ -lssl -lcrypto -lz -lc -ldl' --enable-nonfree --enable-gpl --enable-version3 --enable-postproc --enable-static --enable-openssl --enable-libxml2 --enable-libv4l2 --enable-v4l2_m2m --enable-libfreetype --enable-libsrt --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libopus --enable-libvorbis --disable-ffplay --disable-debug --disable-doc --disable-shared"],
            ["1674134944", "  libavutil      57. 28.100 / 57. 28.100"],
            ["1674134944", "  libavcodec     59. 37.100 / 59. 37.100"],
            ["1674134944", "  libavformat    59. 27.100 / 59. 27.100"],
            ["1674134944", "  libavdevice    59.  7.100 / 59.  7.100"],
            ["1674134944", "  libavfilter     8. 44.100 /  8. 44.100"],
            ["1674134944", "  libswscale      6.  7.100 /  6.  7.100"],
            ["1674134944", "  libswresample   4.  7.100 /  4.  7.100"],
            ["1674134944", "  libpostproc    56.  6.100 / 56.  6.100"],
            ["1674134944", "rtmp://localhost:1935/main: I/O error"]
        ]
    }]
}

The current and the last three process logs are available in the report.

The log entries in the response

  • rtmp://localhost:1935/main: Broken pipe

  • rtmp://localhost:1935/main: I/O error

indicate that the video source is unreachable, so the process fails.
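
To surface only these error lines without scrolling through the full report, the log arrays can be filtered on the command line. This is a minimal sketch, assuming jq is installed and the report layout shown above (a top-level log array plus a history array of earlier runs, with each log entry being a [timestamp, line] pair), fetched from the sub-process report endpoint:

# Sketch: print the last log line of the current run and of each history entry.
# Assumes jq is installed and the report structure shown above.
curl -s http://127.0.0.1:8080/api/v3/process/sub/report \
   -H 'accept: application/json' | jq -r '.log[-1][1], (.history[]? | .log[-1][1])'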

5.4 Fix the process failure

curl http://127.0.0.1:8080/api/v3/process/main/command \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X PUT \
   -d '{"command":"start"}'

This command starts the main-process again, so the sub-process can reach its video source.

"OK"

5.5 Verify the fix

curl http://127.0.0.1:8080/api/v3/process/sub/state \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X GET
{
    "order": "start",
    "exec": "running",
    "runtime_seconds": 1,
    "reconnect_seconds": -1,
    "last_logline": "ffmpeg.mapping:{\"graphs\":[{\"index\":0,\"graph\":[{\"src_name\":\"Parsed_null_0\",\"src_filter\":\"null\",\"dst_name\":\"format\",\"dst_filter\":\"format\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/1000\",\"type\":\"video\",\"format\":\"yuv444p\",\"width\":1280,\"height\":720},{\"src_name\":\"graph 0 input from stream 0:0\",\"src_filter\":\"buffer\",\"dst_name\":\"Parsed_null_0\",\"dst_filter\":\"null\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/1000\",\"type\":\"video\",\"format\":\"yuv444p\",\"width\":1280,\"height\":720},{\"src_name\":\"format\",\"src_filter\":\"format\",\"dst_name\":\"out_0_0\",\"dst_filter\":\"buffersink\",\"inpad\":\"default\",\"outpad\":\"default\",\"timebase\": \"1/1000\",\"type\":\"video\",\"format\":\"yuv444p\",\"width\":1280,\"height\":720}]}],\"mapping\":[{\"input\":{\"index\":0,\"stream\":0},\"graph\":{\"index\":0,\"name\":\"graph 0 input from stream 0:0\"},\"output\":null},{\"input\":null,\"graph\":{\"index\":0,\"name\":\"out_0_0\"},\"output\":{\"index\":0,\"stream\":0}}]}",
    "progress": {
        "inputs": [{
            "id": "0",
            "address": "rtmp://localhost:1935/main",
            "index": 0,
            "stream": 0,
            "format": "flv",
            "type": "video",
            "codec": "h264",
            "coder": "h264",
            "frame": 116,
            "fps": 0.000,
            "packet": 122,
            "pps": 0.000,
            "size_kb": 51,
            "bitrate_kbit": 0.000,
            "pix_fmt": "yuv444p",
            "q": 0.000,
            "width": 1280,
            "height": 720,
            "avstream": null
        }],
        "outputs": [{
            "id": "0",
            "address": "http://admin:irIVvk4miq3JhKGZnY@localhost:8080/memfs/sub.m3u8",
            "index": 0,
            "stream": 0,
            "format": "hls",
            "type": "video",
            "codec": "h264",
            "coder": "libx264",
            "frame": 116,
            "fps": 0.000,
            "packet": 66,
            "pps": 0.000,
            "size_kb": 34,
            "bitrate_kbit": 0.000,
            "pix_fmt": "yuv444p",
            "q": 28.000,
            "width": 1280,
            "height": 720,
            "avstream": null
        }],
        "frame": 116,
        "packet": 66,
        "fps": 0,
        "q": 28,
        "size_kb": 34,
        "time": 2.560,
        "bitrate_kbit": 0,
        "speed": 2.520,
        "drop": 0,
        "dup": 0
    },
    "memory_bytes": 463351808,
    "cpu_usage": 0,
    "command": ["-i", "rtmp://localhost:1935/main", "-f", "hls", "http://admin:irIVvk4miq3JhKGZnY@localhost:8080/memfs/sub.m3u8"]
}

The exec status in the response is running again.
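
If you only care about the exec field, a small command-line filter keeps the check compact. A minimal sketch, assuming jq is installed:

# Sketch: print just the exec field of the sub-process state (assumes jq).
curl -s http://127.0.0.1:8080/api/v3/process/sub/state \
   -H 'accept: application/json' | jq -r '.exec'

It prints running once the process has recovered.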

6. Cleanup

  1. Stop and delete the sub-process via the Process API

  2. Stop and delete the main-process via the Process API

6.1 Delete the sub-process

curl http://127.0.0.1:8080/api/v3/process/sub \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X DELETE
"OK"

6.2 Delete the main-process

curl http://127.0.0.1:8080/api/v3/process/main \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X DELETE
"OK"

6.3 Stop the Core

docker kill core
docker rm core
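
If you prefer a single step, docker rm -f forcibly stops and removes the container in one command; the result is the same as the two commands above.

# Equivalent one-liner: force-remove the running container.
docker rm -f core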