Process
Manage FFmpeg processes
The `/api/v3/process` family of API calls lets you manage and monitor FFmpeg processes on the datarhei Core.
An FFmpeg process definition, as required by the API, consists of its inputs, its outputs, and global options. This is a very minimalistic abstraction of the FFmpeg command line and assumes that you know the command line options needed to achieve what you want.
The most minimal process definition is:
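A minimal sketch (the input and output addresses are placeholders you would replace with real values):

```json
{
    "input": [
        { "address": "[input address]" }
    ],
    "output": [
        { "address": "[output address]" }
    ]
}
```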
This will be translated to the FFmpeg command line:
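Roughly the following (a sketch; the Core may add further default options):

```sh
ffmpeg -i [input address] [output address]
```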
Let's use this as a starting point for a more practical example. We want to generate a test video with silent audio, encode it to H.264 and AAC, and finally send it via RTMP to some destination.
Identification
You can give each process an ID with the field `id`. There are no restrictions regarding the format or allowed characters. If you don't provide an ID, one will be generated for you. The ID is used in later API calls to query and modify the process after it has been created. An ID has to be unique on each datarhei Core.
In addition to the ID, you can provide a reference with the field `reference`. It lets you attach arbitrary information to the process and will not be interpreted at all. You can use it, e.g., for grouping different processes together.
Inputs
First, we define the inputs:
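A sketch of such an input section, assuming FFmpeg's `lavfi` virtual input device with the `testsrc` video source and the `anullsrc` silent audio source; the input IDs are illustrative:

```json
{
    "input": [
        {
            "id": "video",
            "address": "testsrc=size=1280x720:rate=25",
            "options": ["-f", "lavfi", "-re"]
        },
        {
            "id": "audio",
            "address": "anullsrc=r=44100:cl=stereo",
            "options": ["-f", "lavfi"]
        }
    ]
}
```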
This will be translated to the FFmpeg command line:
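Assuming `lavfi` test inputs (`testsrc` video, `anullsrc` silent audio), a sketch; note that the options of each input are placed before the corresponding `-i`:

```sh
ffmpeg -f lavfi -re -i testsrc=size=1280x720:rate=25 -f lavfi -i anullsrc=r=44100:cl=stereo
```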
The `id` for each input is optional and is used for later reference or for replacing placeholders (more about placeholders later). If you don't provide an ID for an input, it will be generated for you.
Outputs
Next, we define the output. Because both inputs deliver raw video and raw audio data, we need to encode them.
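A sketch of such an output section; the encoder settings are illustrative:

```json
{
    "output": [
        {
            "id": "rtmp",
            "address": "rtmp://someip/live/stream",
            "options": [
                "-codec:v", "libx264",
                "-preset", "veryfast",
                "-codec:a", "aac",
                "-f", "flv"
            ]
        }
    ]
}
```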
This will be translated to the FFmpeg command line:
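Assuming an H.264/AAC encode to an illustrative RTMP destination, a sketch of the output part of the command line; the output options are placed before the output address:

```sh
ffmpeg ... -codec:v libx264 -preset veryfast -codec:a aac -f flv rtmp://someip/live/stream
```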
Putting it all together:
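Combined into one process config (a sketch; the IDs, encoder settings, and destination are illustrative):

```json
{
    "id": "test",
    "input": [
        {
            "id": "video",
            "address": "testsrc=size=1280x720:rate=25",
            "options": ["-f", "lavfi", "-re"]
        },
        {
            "id": "audio",
            "address": "anullsrc=r=44100:cl=stereo",
            "options": ["-f", "lavfi"]
        }
    ],
    "output": [
        {
            "id": "rtmp",
            "address": "rtmp://someip/live/stream",
            "options": ["-codec:v", "libx264", "-preset", "veryfast", "-codec:a", "aac", "-f", "flv"]
        }
    ]
}
```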
All this together results in the command line:
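A sketch, assuming the test-source inputs and the illustrative RTMP output from this example:

```sh
ffmpeg -f lavfi -re -i testsrc=size=1280x720:rate=25 \
    -f lavfi -i anullsrc=r=44100:cl=stereo \
    -codec:v libx264 -preset veryfast -codec:a aac \
    -f flv rtmp://someip/live/stream
```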
Global options
FFmpeg is quite talkative by default and we want to be notified only about errors. We can add global options that will be placed before any inputs:
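A sketch, reducing FFmpeg's log output to errors only:

```json
{
    "options": ["-loglevel", "error"]
}
```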
The inputs and outputs are left out for the sake of brevity. Now our FFmpeg command line is complete:
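A sketch with the global options in front, assuming the test-source inputs and the illustrative RTMP output used in this example:

```sh
ffmpeg -loglevel error \
    -f lavfi -re -i testsrc=size=1280x720:rate=25 \
    -f lavfi -i anullsrc=r=44100:cl=stereo \
    -codec:v libx264 -preset veryfast -codec:a aac \
    -f flv rtmp://someip/live/stream
```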
Control
The process config also allows you to control what happens when you create the process and what happens after the process finishes:
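A sketch with the control options described below (the values are illustrative):

```json
{
    "reconnect": true,
    "reconnect_delay_seconds": 15,
    "autostart": true,
    "stale_timeout_seconds": 30
}
```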
The `reconnect` option tells the datarhei Core to restart the process after it finishes (either normally or because of an error). It waits `reconnect_delay_seconds` before the process is restarted. Set this value to `0` in order to restart the process immediately.
The `autostart` option causes the process to be started as soon as it has been created. It is equivalent to setting this option to `false`, creating the process, and then issuing the start command.
The `stale_timeout_seconds` option causes the process to be (forcefully) stopped in case it stalls, i.e. no packets are processed for this number of seconds. Disable this feature by setting the value to `0`.
Limits
The datarhei Core constantly monitors the vitals of each process, including memory and CPU consumption. If you have limited resources, or you want an upper limit for the resources a process is allowed to consume, you can set the limit options in the process config:
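A sketch, assuming the limit options live in a `limits` object of the process config (the values are illustrative):

```json
{
    "limits": {
        "cpu_usage": 10,
        "memory_mbytes": 50,
        "waitfor_seconds": 5
    }
}
```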
The `cpu_usage` option sets the limit of CPU usage in percent, e.g. not more than 10% of the CPU may be used by this process. A value of `0` (the default) disables this option.

The `memory_mbytes` option sets the limit of memory usage in megabytes, e.g. not more than 50 megabytes of memory may be used. A value of `0` (the default) disables this option.

If the resource consumption for at least one of the limits is exceeded for `waitfor_seconds`, the process will be forcefully terminated.
Cleanup
For long-running processes that produce a lot of files (e.g. an HLS live stream), it can happen that not all created files are removed by the FFmpeg process itself, or that the process exits and doesn't or can't clean up the files it created. This leaves files on the filesystem that shouldn't be there and just use up space.
With the optional array of cleanup rules for each output, it is possible to define rules for removing files from the memory filesystem or disk. Each rule consists of a glob pattern and a maximum allowed number of files matching that pattern, or a maximum permitted age for the files matching that pattern. The pattern starts with either `memfs:` or `diskfs:`, depending on which filesystem the rule is designated to, followed by a glob pattern identifying the files. If `max_files` is set to a number larger than 0, the oldest matching files are deleted whenever the list of matching files grows longer than that number. If `max_file_age_seconds` is set to a number larger than 0, all matching files that are older than this number of seconds are deleted. If `purge_on_delete` is set to `true`, all matching files are deleted when the process is deleted.
As of version 16.12.0 the prefixes for selecting the filesystem (e.g. `diskfs:` or `memfs:`) correspond to the configured name of the filesystem, in case you mounted one or more S3 filesystems. Reserved names are `disk`, `diskfs`, `mem`, and `memfs`. The names `disk` and `mem` are synonyms for `diskfs` and `memfs`, respectively. E.g. if you have an S3 filesystem with the name `aws` mounted, use the `aws:` prefix.
Optional cleanup configuration:
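A sketch of a cleanup rule attached to an output (the pattern and values are illustrative):

```json
{
    "output": [
        {
            "address": "{memfs}/{processid}.m3u8",
            "cleanup": [
                {
                    "pattern": "memfs:/{processid}**",
                    "max_files": 0,
                    "max_file_age_seconds": 60,
                    "purge_on_delete": true
                }
            ]
        }
    ]
}
```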
With the `pattern` parameter you can select files based on a glob pattern, with the addition of the `**` placeholder to include multiple subdirectories. E.g. selecting all `.ts` files in the root directory uses the pattern `/*.ts`, while selecting all `.ts` files in the whole filesystem uses the pattern `/**.ts`.
As part of the pattern you can use placeholders.
Examples:
The file on the disk with the ID of the process as the name and the extension `.m3u8`.

All files on disk whose names start with the ID of the process and that have the extension `.m3u8` or `.ts`.

All files on disk that have the extension `.ts` and are in the folder structure denoted by the process' reference, ID, and the ID of the output this cleanup rule belongs to.

All files whose name starts with the reference of the process, e.g. `/abc_1.ts`, but not `/abc/1.ts`.

All files whose name or path starts with the reference of the process, e.g. `/abc_1.ts`, `/abc/1.ts`, or `/abc_data/foobar/42.txt`.
References
References allow you to refer to an output of another process and use it as an input. The address of the input has to be in the form `#[processid]:output=[id]`, where `[processid]` denotes the ID of the process you want to use the output from, and `[id]` denotes the ID of the output of that process.
Example:
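A sketch with two processes (the IDs are illustrative):

```json
[
    {
        "id": "first",
        "output": [
            { "id": "main", "address": "rtmp://someip/live/stream" }
        ]
    },
    {
        "id": "second",
        "input": [
            { "id": "in", "address": "#first:output=main" }
        ]
    }
]
```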
The second process will use `rtmp://someip/live/stream` as its input address.
Placeholder
Placeholders are a way to parametrize parts of the config. A placeholder is surrounded by curly braces, e.g. `{processid}`.

Some placeholders require parameters. Add parameters to a placeholder by appending a comma-separated list of key/value pairs, e.g. `{placeholder,key1=value1,key2=value2}`. This can be combined with escaping.
As of version 16.12.0 the value for a parameter of a placeholder can be a variable. Currently known variables are `$processid` and `$reference`. These will be replaced by the respective process ID and process reference, e.g. `{rtmp,name=$processid.stream}`.
Example:
Assume you have an input that gets encoded in three different resolutions which are written to disk. With placeholders you can parametrize the output files. The input and the encoding options are left out for brevity.
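A sketch of such an output section, matching the result described below (process ID `a`, reference `b`, output IDs `360`, `720`, and `1080`):

```json
{
    "id": "a",
    "reference": "b",
    "output": [
        { "id": "360",  "address": "{diskfs}/{processid}_{reference}_{outputid}.mp4" },
        { "id": "720",  "address": "{diskfs}/{processid}_{reference}_{outputid}.mp4" },
        { "id": "1080", "address": "{diskfs}/{processid}_{reference}_{outputid}.mp4" }
    ]
}
```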
This will create three files with the names `a_b_360.mp4`, `a_b_720.mp4`, and `a_b_1080.mp4` in the directory defined by `storage.disk.dir`.
In case you use a placeholder in a place where characters need escaping (e.g. in the options of the `tee` output muxer), you can define the character to be escaped in the placeholder by adding it to the placeholder name, prefixed with a `^`.
Example: you have a process with the ID `abc:snapshot`, and in a filter option you have to escape all `:` in the value for the `{processid}` placeholder. Write `{processid^:}`; it will then be replaced by `abc\:snapshot`. The escape character is always `\`. In case there are `\` in the value, they will also get escaped.
All known placeholders are:
`{processid}`
Will be replaced by the ID of the process. Locations where this placeholder can be used: `input.id`, `input.address`, `input.options`, `output.id`, `output.address`, `output.options`, `output.cleanup.pattern`
`{reference}`
Will be replaced by the reference of the process. Locations where this placeholder can be used: `input.id`, `input.address`, `input.options`, `output.id`, `output.address`, `output.options`, `output.cleanup.pattern`
`{inputid}`
Will be replaced by the ID of the input. Locations where this placeholder can be used: `input.address`, `input.options`
`{outputid}`
Will be replaced by the ID of the output. Locations where this placeholder can be used: `output.address`, `output.options`, `output.cleanup.pattern`
`{diskfs}`
Will be replaced by the provided value of `storage.disk.dir`. Locations where this placeholder can be used: `options`, `input.address`, `input.options`, `output.address`, `output.options`
As of version 16.12.0 you can use the alternative syntax `{fs:disk}`.
`{memfs}`
Will be replaced by the internal base URL of the memory filesystem. This placeholder is convenient if you change, e.g., the listening port of the HTTP server: then you don't need to modify each process configuration that uses the memory filesystem.
Locations where this placeholder can be used: `input.address`, `input.options`, `output.address`, `output.options`
Example: `{memfs}/foobar.m3u8` will be replaced with `http://127.0.0.1:8080/memfs/foobar.m3u8` if the datarhei Core is listening on 8080 for HTTP requests.
As of version 16.12.0 you can use the alternative syntax `{fs:mem}`.
`{fs:[name]}`
As of version 16.12.0. Will be replaced by the internal base URL of the named filesystem. This placeholder is convenient if you change, e.g., the listening port of the HTTP server: then you don't need to modify each process configuration that uses that filesystem.
Locations where this placeholder can be used: `input.address`, `input.options`, `output.address`, `output.options`
Predefined names are `disk` and `mem`. Additional names correspond to the names of the mounted S3 filesystems.
Example: `{fs:aws}/foobar.m3u8` will be replaced with `http://127.0.0.1:8080/awsfs/foobar.m3u8` if the datarhei Core is listening on 8080 for HTTP requests and the S3 storage is configured with the name `aws` and the mountpoint `/awsfs`.
`{rtmp}`
Will be replaced by the internal address of the RTMP server. This placeholder is convenient if you change, e.g., the listening port or the app of the RTMP server: then you don't need to modify each process configuration that uses the RTMP server.
Required parameter: `name` (name of the stream)
Locations where this placeholder can be used: `input.address`, `output.address`
Example: `{rtmp,name=foobar.stream}` will be replaced with `rtmp://127.0.0.1:1935/live/foobar.stream?token=abc` if the RTMP server is configured to listen on port 1935, has the app `/live`, and requires the token `abc`. See the RTMP configuration.
`{srt}`
Will be replaced by the internal address of the SRT server. This placeholder is convenient if you change, e.g., the listening port of the SRT server: then you don't need to modify each process configuration that uses the SRT server.
Required parameters: `name` (name of the resource), `mode` (either `publish` or `request`)
Locations where this placeholder can be used: `input.address`, `output.address`
Example: `{srt,name=foobar,mode=request}` will be replaced with `srt://127.0.0.1:6000?mode=caller&transtype=live&streamid=foobar,mode:request,token=abc&passphrase=1234` if the SRT server is configured to listen on port 6000, requires the token `abc`, and the passphrase `1234`. See the SRT configuration.
Requires Core v16.11.0+.
As of version 16.12.0 the placeholder accepts the parameter `latency`, which defaults to 20ms.
Create
Create a new process. The ID of the process will be required in order to query or manipulate the process later on. If you don't provide an ID, one will be generated for you. The response of a successful API call includes the process config as it has been stored (including the generated ID).
You can control the process with commands.
Example:
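A sketch using curl (host, port, and the config body are illustrative; authentication is omitted):

```sh
curl -X POST "http://127.0.0.1:8080/api/v3/process" \
    -H "Content-Type: application/json" \
    -d '{
        "id": "test",
        "input":  [{ "address": "testsrc=size=1280x720:rate=25", "options": ["-f", "lavfi"] }],
        "output": [{ "address": "/dev/null", "options": ["-codec", "copy", "-f", "null"] }]
    }'
```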
Description:
Read
These API calls let you query the current state of the processes that are registered on the datarhei Core. For each process there are several aspects available, such as:
`config`
The config with which the process has been created.
`state`
The current state of the process, e.g. whether it is currently running and for how long. If a process is running, the progress data is also included. This comprises a list of all input and output streams with all their vitals (frames, bitrate, codec, ...). More details.
`report`
The logging output from the FFmpeg process and a history of previous runs of the same process. More details.
`metadata`
All metadata associated with this process. More details.
By default all aspects are included in the process listing. If you are only interested in specific aspects, you can use the `?filter=...` query parameter. Provide it a comma-separated list of aspects and only those will be included in the response, e.g. `?filter=state,report`.
List processes
This API call lists all processes that are registered on the datarhei Core. You can restrict the listed processes by providing

a comma-separated list of specific IDs (`?id=a,b,c`)
a reference (`?reference=...`)
a pattern for matching IDs (`?idpattern=...`)
a pattern for matching references (`?refpattern=...`)
With the `idpattern` and `refpattern` query parameters you can select process IDs and/or references based on a glob pattern. If you provide a list of specific IDs (or a reference) together with patterns for IDs or references, the patterns are matched first. The resulting list of IDs and references is then checked against the provided list of IDs or the reference.
Example:
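A sketch using curl (host, port, and the ID pattern are illustrative):

```sh
curl "http://127.0.0.1:8080/api/v3/process?filter=state,report&idpattern=test*"
```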
Description:
Process by ID
If you know the ID of the process you want details about, you can fetch them directly. Here, too, you can apply the `filter` query parameter regarding the aspects.
Example:
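A sketch using curl (host, port, and the process ID `test` are illustrative):

```sh
curl "http://127.0.0.1:8080/api/v3/process/test?filter=state"
```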
Description:
Process config by ID
This endpoint lets you fetch the config of a process directly.
Example:
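A sketch using curl (host, port, and the process ID `test` are illustrative):

```sh
curl "http://127.0.0.1:8080/api/v3/process/test/config"
```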
Description:
Update
You can change the process configuration of an existing process. It doesn't matter whether the process to be updated is currently running or not. The current order (e.g. a start command) will be transferred to the updated process.
The new process configuration is not required to have the same ID as the one you're about to replace. After a successful update you have to use the new process ID in order to query or manipulate the process.
The new process configuration is checked for validity before it replaces the current process configuration.
As of version 16.12.0 you can provide a partial process config for updates, i.e. you need to `PUT` only those fields that actually change.
Example:
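A sketch of a partial update using curl (as of version 16.12.0; host, port, process ID, and the updated field are illustrative):

```sh
curl -X PUT "http://127.0.0.1:8080/api/v3/process/test" \
    -H "Content-Type: application/json" \
    -d '{ "autostart": true }'
```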
Description:
Delete
Delete a process. If the process is currently running, it will be stopped gracefully before it is removed.
Example:
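A sketch using curl (host, port, and the process ID `test` are illustrative):

```sh
curl -X DELETE "http://127.0.0.1:8080/api/v3/process/test"
```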
Description: