The process state reflects the current vitals of a process. This includes, for example, how long the process has been in its current state, the order (whether it should be running or stopped, see Command), the current CPU and memory consumption, the actual FFmpeg command line, and more.
If the process is running, you will receive progress data in addition to the metrics mentioned above. For each input and output stream, the progress data includes the bitrate, frame rate, bytes read/written, processing speed, duplicated and dropped frames, and so on.
The following is an example of the output for a running process:
Example: read the state of a process with the ID test:
Description:
Manage FFmpeg processes
The /api/v3/process
family of API calls lets you manage and monitor FFmpeg processes on the datarhei Core.
An FFmpeg process definition, as required by the API, consists of its inputs, its outputs, and global options. This is a very minimalistic abstraction of the FFmpeg command line and assumes that you know the command line options needed to achieve what you want.
The most minimal process definition is:
This will be translated to the FFmpeg command line:
Let's use this as a starting point for a more practical example. We want to generate a test video with silent audio, encode it to H.264 and AAC, and finally send it via RTMP to some destination.
You can give each process an ID with the field id
. There are no restrictions regarding the format or allowed characters. If you don't provide an ID, one will be generated for you. The ID is used to identify the process later in API calls for querying and modifying the process after it has been created. An ID has to be unique for each datarhei Core.
In addition to the ID, you can provide a reference with the field reference
. This allows you to provide arbitrary information with the process. It will not be interpreted at all. You can use it, e.g., to group different processes together.
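Putting the two fields together, a minimal sketch of such a process config might look like this. The `id` and `reference` fields are described above; the empty `input`/`output` arrays are an assumed shape for illustration only:

```python
import json

# Sketch of a process config carrying an ID and a reference.
# "id" and "reference" are the fields described above; the empty
# "input"/"output" arrays are assumed shapes for illustration only.
process_config = {
    "id": "restream-cam1",              # optional; generated if omitted
    "reference": "building-a/floor-2",  # free-form, not interpreted by the Core
    "input": [],
    "output": [],
}

payload = json.dumps(process_config)
```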
First, we define the inputs:
This will be translated to the FFmpeg command line:
The id
for each input is optional and will be used for later reference or for replacing placeholders (more about placeholders later). If you don't provide an ID for an input, it will be generated for you.
Next, we define the output. Because both the inputs are raw video and raw audio data, we need to encode them.
This will be translated to the FFmpeg command line:
Putting it all together:
All this together results in the command line:
FFmpeg is quite talkative by default and we want to be notified only about errors. We can add global options that will be placed before any inputs:
The inputs and outputs are left out for the sake of brevity. Now our FFmpeg command line is complete:
The process config also allows you to control what happens when you create the process and what happens after the process finishes:
The reconnect
option tells the datarhei Core to restart the process after it finishes (either normally or because of an error). It will wait reconnect_delay_seconds
before restarting the process. Set this value to 0
to restart the process immediately.
The autostart
option causes the process to be started as soon as it has been created. Setting it to true is equivalent to setting it to false
, creating the process, and then issuing the start command.
The stale_timeout_seconds
will cause the process to be (forcefully) stopped in case it stalls, i.e. no packets are processed for this number of seconds. Disable this feature by setting the value to 0
.
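The three lifecycle options described above could appear in a process config like this (a minimal sketch; the values are examples):

```python
# Lifecycle options as described above; the values are examples.
lifecycle_options = {
    "reconnect": True,              # restart the process after it finishes
    "reconnect_delay_seconds": 15,  # 0 restarts immediately
    "autostart": True,              # start as soon as the process is created
    "stale_timeout_seconds": 30,    # 0 disables stall detection
}
```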
The datarhei Core constantly monitors the vitals of each process, including memory and CPU consumption. If you have limited resources, or you want an upper limit on the resources a process is allowed to consume, you can set the limit options in the process config:
The cpu_usage
option sets the limit of CPU usage in percent, e.g. no more than 10% of the CPU should be used for this process. A value of 0 (the default) will disable this option.
The memory_mbytes
option sets the limit of memory usage in megabytes, e.g. no more than 50 megabytes of memory should be used. A value of 0
(the default) will disable this option.
If the resource consumption exceeds at least one of the limits for waitfor_seconds
, then the process will be forcefully terminated.
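A sketch of the limit semantics described above: the limits object uses the field names from the text, and should_terminate is a hypothetical helper illustrating when the Core would step in (not the Core's implementation):

```python
# Limit options as described above; values are examples.
limits = {
    "cpu_usage": 10,       # percent; 0 (default) disables the limit
    "memory_mbytes": 50,   # megabytes; 0 (default) disables the limit
    "waitfor_seconds": 5,  # grace period before forceful termination
}

def should_terminate(cpu_percent, mem_mbytes, exceeded_for_seconds, limits):
    """Hypothetical helper illustrating the semantics described above:
    terminate once at least one enabled limit has been exceeded for
    waitfor_seconds."""
    over_cpu = limits["cpu_usage"] > 0 and cpu_percent > limits["cpu_usage"]
    over_mem = limits["memory_mbytes"] > 0 and mem_mbytes > limits["memory_mbytes"]
    return (over_cpu or over_mem) and exceeded_for_seconds >= limits["waitfor_seconds"]
```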
For long-running processes that produce a lot of files (e.g. an HLS live stream), it can happen that not all created files are removed by the FFmpeg process itself, or that the process exits and doesn't or can't clean up the files it created. This leaves files on the filesystem that shouldn't be there and just use up space.
With the optional array of cleanup rules for each output, it is possible to define rules for removing files from the memory filesystem or disk. Each rule consists of a glob pattern and a maximum allowed number of files matching that pattern, or a maximum permitted age for the files matching that pattern. The pattern starts with either memfs:
or diskfs:
depending on which filesystem this rule is designated to. Then a glob pattern follows to identify the files. If max_files
is set to a number larger than 0, then the oldest files from the matching files will be deleted if the list of matching files is longer than that number. If max_file_age_seconds
is set to a number larger than 0, then all files that are older than this number of seconds from the matching files will be deleted. If purge_on_delete
is set to true
, then all matching files will be deleted when the process is deleted.
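A cleanup rule combining the fields described above might look like this (a minimal sketch; the pattern and values are examples):

```python
# One cleanup rule per the fields described above; values are examples.
cleanup_rules = [
    {
        "pattern": "memfs:/{processid}/**.ts",  # filesystem prefix + glob pattern
        "max_files": 12,            # keep at most the 12 newest matching files
        "max_file_age_seconds": 0,  # 0 disables age-based deletion
        "purge_on_delete": True,    # remove matches when the process is deleted
    }
]
```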
As of version 16.12.0 the prefixes for selecting the filesystem (e.g. diskfs:
or memfs:
) correspond to the configured name of the filesystem, in case you mounted one or more S3 filesystems. Reserved names are disk
, diskfs
, mem
, and memfs
. The names disk
and mem
are synonyms for diskfs
and memfs
, respectively. E.g. if you have an S3 filesystem with the name aws
mounted, use the aws:
prefix.
Optional cleanup configuration:
With the pattern
parameter you can select files based on a glob pattern, with the addition of the **
placeholder to include multiple subdirectories, e.g. selecting all .ts
files in the root directory has the pattern /*.ts
, selecting all .ts
files in the whole filesystem has the pattern /**.ts
.
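To build intuition for the difference between * and ** described above, here is a tiny illustrative matcher: * stays within one path segment, ** also crosses / boundaries. This is not the Core's actual implementation.

```python
import re

def matches(pattern: str, path: str) -> bool:
    """Illustrative matcher for the semantics described above: '*' stays
    within one path segment, '**' also crosses '/' boundaries. Not the
    Core's actual implementation."""
    parts = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            parts.append(".*")      # '**' may cross directory boundaries
            i += 2
        elif pattern[i] == "*":
            parts.append("[^/]*")   # '*' stops at the next '/'
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.fullmatch("".join(parts), path) is not None
```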
As part of the pattern you can use placeholders.
Examples:
The file on the disk with the ID of the process as the name and the extension .m3u8
.
All files on disk whose names starts with the ID of the process and that have the extension .m3u8
or .ts
.
All files on disk that have the extension .ts and are in the folder structure denoted by the process' reference, ID, and the ID of the output this cleanup rule belongs to.
All files whose name starts with the reference of the process, e.g. /abc_1.ts
, but not /abc/1.ts
.
All files whose name or path starts with the reference of the process, e.g. /abc_1.ts, /abc/1.ts
, or /abc_data/foobar/42.txt
.
References allow you to refer to an output of another process and to use it as input. The address of the input has to be in the form #[processid]:output=[id]
, where [processid]
denotes the ID of the process you want to use the output from, and [id]
denotes the ID of the output of that process.
Example:
The second process will use rtmp://someip/live/stream
as its input address.
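A hypothetical parser for the address form above, to make the syntax concrete (not the Core's parser):

```python
def parse_output_reference(address: str):
    """Parse an input address of the form '#<processid>:output=<id>' as
    described above. Illustrative only; not the Core's parser."""
    if not address.startswith("#"):
        return None  # a regular address, not a reference
    process_id, sep, output_id = address[1:].partition(":output=")
    if not sep or not process_id or not output_id:
        return None
    return process_id, output_id
```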
Placeholders are a way to parametrize parts of the config. A placeholder is surrounded by curly braces, e.g. {processid}
.
Some placeholders require parameters. Add parameters to a placeholder by appending a comma-separated list of key/value pairs, e.g. {placeholder,key1=value1,key2=value2}
. This can be combined with escaping.
As of version 16.12.0 the value for a parameter of a placeholder can be a variable. Currently known variables are $processid
and $reference
. These will be replaced by the respective process ID and process reference, e.g. {rtmp,name=$processid.stream}
.
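To make the syntax concrete, here is an illustrative expansion of parametrized placeholders with variable substitution. Only an {rtmp,...} placeholder is modeled, and the base address rtmp://127.0.0.1:1935/live is an assumption; this is not the Core's implementation.

```python
import re

def expand(text: str, process_id: str, reference: str) -> str:
    """Illustrative expansion of '{name,key1=value1,...}' placeholders with
    the $processid / $reference variables described above. Only an
    {rtmp,...} placeholder is modeled here; the base address is an
    assumption. Not the Core's implementation."""
    def repl(match):
        name, *raw_params = match.group(1).split(",")
        params = {}
        for raw in raw_params:
            key, _, value = raw.partition("=")
            value = value.replace("$processid", process_id)
            value = value.replace("$reference", reference)
            params[key] = value
        if name == "rtmp":
            return "rtmp://127.0.0.1:1935/live/" + params["name"]
        raise KeyError("unknown placeholder: " + name)
    return re.sub(r"\{([^{}]+)\}", repl, text)
```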
Example:
Assume you have an input that gets encoded in three different resolutions which are written to disk. With placeholders you can parametrize the output files. The input and the encoding options are left out for brevity.
This will create three files with the names a_b_360.mp4
, a_b_720.mp4
, and a_b_1080.mp4
in the directory defined by storage.disk.dir.
In case you use a placeholder in a place where characters need escaping (e.g. in the options of the tee
output muxer), you can define the character to be escaped in the placeholder by appending it to the placeholder name, prefixed with a ^
.
Example: you have a process with the ID abc:snapshot
and in a filter option you have to escape all :
in the value for the {processid}
placeholder, write {processid^:}
. It will then be replaced by abc\:snapshot
. The escape character is always \
. In case there are \
in the value, they will also get escaped.
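The escaping behavior described above can be sketched as follows (illustrative only; backslashes in the value are escaped first so the result stays unambiguous):

```python
def escape_value(value: str, escape_char: str) -> str:
    """Illustrative escaping for '{processid^:}'-style placeholders as
    described above: the escape character is always a backslash, and
    backslashes already present in the value are escaped as well.
    Not the Core's implementation."""
    value = value.replace("\\", "\\\\")  # escape existing backslashes first
    if escape_char != "\\":
        value = value.replace(escape_char, "\\" + escape_char)
    return value
```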
All known placeholders are:
Will be replaced by the ID of the process. Locations where this placeholder can be used: input.id
, input.address
, input.options
, output.id
, output.address
, output.options
, output.cleanup.pattern
Will be replaced by the reference of the process. Locations where this placeholder can be used: input.id
, input.address
, input.options
, output.id
, output.address
, output.options
, output.cleanup.pattern
Will be replaced by the ID of the input. Locations where this placeholder can be used: input.address
, input.options
Will be replaced by the ID of the output. Locations where this placeholder can be used: output.address
, output.options
, output.cleanup.pattern
Will be replaced by the provided value of storage.disk.dir. Locations where this placeholder can be used: options
, input.address
, input.options
, output.address
, output.options
As of version 16.12.0 you can use the alternative syntax {fs:disk}
.
Will be replaced by the internal base URL of the memory filesystem. This placeholder is convenient if you change, e.g. the listening port of the HTTP server. Then you don't need to modify each process configuration where you use the memory filesystem.
Locations where this placeholder can be used: input.address
, input.options
, output.address
, output.options
Example: {memfs}/foobar.m3u8
will be replaced with http://127.0.0.1:8080/memfs/foobar.m3u8
if the datarhei Core is listening on 8080 for HTTP requests.
As of version 16.12.0 you can use the alternative syntax {fs:mem}
.
As of version 16.12.0. Will be replaced by the internal base URL of the named filesystem. This placeholder is convenient if you change, e.g. the listening port of the HTTP server. Then you don't need to modify each process configuration where you use that filesystem.
Locations where this placeholder can be used: input.address
, input.options
, output.address
, output.options
Predefined names are disk
and mem
. Additional names correspond to the names of the mounted S3 filesystems.
Example: {fs:aws}/foobar.m3u8
will be replaced with http://127.0.0.1:8080/awsfs/foobar.m3u8
if the datarhei Core is listening on 8080 for HTTP requests, and the S3 storage is configured to have the name aws
and the mountpoint /awsfs
.
Will be replaced by the internal address of the RTMP server. This placeholder is convenient if you change, e.g. the listening port or the app of the RTMP server. Then you don't need to modify each process configuration where you use the RTMP server.
Required parameter: name
(name of the stream)
Locations where this placeholder can be used: input.address
, output.address
Example: {rtmp,name=foobar.stream}
will be replaced with rtmp://127.0.0.1:1935/live/foobar.stream?token=abc
, if the RTMP server is configured to listen on port 1935, has the app /live and requires the token abc
. See the RTMP configuration.
Will be replaced by the internal address of the SRT server. This placeholder is convenient if you change, e.g. the listening port of the SRT server. Then you don't need to modify each process configuration where you use the SRT server.
Required parameters: name
(name of the resource), mode
(either publish
or request
)
Locations where this placeholder can be used: input.address
, output.address
Example: {srt,name=foobar,mode=request}
will be replaced with srt://127.0.0.1:6000?mode=caller&transtype=live&streamid=foobar,mode:request,token=abc&passphrase=1234
, if the SRT server is configured to listen on port 6000, requires the token abc
and the passphrase 1234
. See the SRT configuration.
Requires Core v16.11.0+
As of version 16.12.0 the placeholder accepts the parameter latency
and defaults to 20ms.
Create a new process. The ID of the process will be required in order to query or manipulate the process later on. If you don't provide an ID, it will be generated for you. The response of the successful API call includes the process config as it has been stored (including the generated ID).
You can control the process with commands.
Example:
Description:
These API calls let you query the current state of the processes registered on the datarhei Core. For each process, several aspects are available, such as:
config
The config with which the process has been created.
state
The current state of the process, e.g. whether it is currently running and for how long. If the process is running, the progress data is also included: a list of all input and output streams with all their vitals (frames, bitrate, codec, ...). More details.
report
The logging output from the FFmpeg process and a history of previous runs of the same process. More details.
metadata
All metadata associated with this process. More details.
By default all aspects are included in the process listing. If you are only interested in specific aspects, then you can use the ?filter=...
query parameter. Provide it a comma-separated list of aspects and then only those will be included in the response, e.g. ?filter=state,report
.
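Building the query string for such a filtered listing might look like this (the base URL is an example):

```python
from urllib.parse import urlencode

# Build the listing URL with only the "state" and "report" aspects,
# as described above. The base URL is an example.
base = "http://127.0.0.1:8080/api/v3/process"
url = base + "?" + urlencode({"filter": "state,report"})
```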
This API call lists all processes that are registered on the datarhei Core. You can restrict the listed processes by providing
a comma-separated list of specific IDs (?id=a,b,c
)
a reference (?reference=...
)
a pattern for the matching ID (?idpattern=...
)
a pattern for the matching references (?refpattern=...
)
With the idpattern
and refpattern query parameters you can select process IDs and/or references based on a glob pattern. If you provide both patterns and a list of specific IDs or a reference, the patterns are matched first; the resulting list of IDs and references is then checked against the provided list of IDs or the reference.
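The matching order described above (patterns first, then intersection with the explicit list or reference) can be sketched like this. The registry mapping process ID to reference is made up for illustration; this is not the Core's implementation:

```python
import fnmatch

# Example registry mapping process ID -> reference (made-up values).
processes = {"cam1": "hall", "cam2": "hall", "mix": "studio"}

def select(registry, ids=None, reference=None, idpattern=None, refpattern=None):
    """Illustrative selection in the order described above: glob patterns
    are matched first, then the result is checked against the explicit
    ID list and/or reference. Not the Core's implementation."""
    result = {
        pid for pid, ref in registry.items()
        if (idpattern is None or fnmatch.fnmatchcase(pid, idpattern))
        and (refpattern is None or fnmatch.fnmatchcase(ref, refpattern))
    }
    if ids is not None:
        result &= set(ids)
    if reference is not None:
        result = {pid for pid in result if registry[pid] == reference}
    return sorted(result)
```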
Example:
Description:
If you know the ID of the process you want the details about, you can fetch them directly. Here you can apply the filter
query parameter regarding the aspects.
Example:
The second parameter of client.Process
is the list of aspects. An empty list means all aspects.
Description:
This endpoint lets you fetch the config of a process directly.
Example:
Description:
You can change the process configuration of an existing process. It doesn't matter whether the process to be updated is currently running or not. The current order will be transferred to the updated process.
The new process configuration is not required to have the same ID as the one you're about to replace. After the successful update you have to use the new process ID in order to query or manipulate the process.
The new process configuration is checked for validity before it replaces the current process configuration.
As of version 16.12.0 you can provide a partial process config for updates, i.e. you need to PUT
only those fields that actually change.
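The effect of a partial update can be sketched as a shallow merge (illustrative; the Core's actual merge semantics for nested fields may differ):

```python
def apply_partial_update(current: dict, partial: dict) -> dict:
    """Illustrative shallow merge for partial updates as described above:
    fields present in the PUT body replace the stored ones, all other
    fields are kept. Not necessarily the Core's exact merge semantics."""
    updated = dict(current)
    updated.update(partial)
    return updated
```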
Example:
Client details
Description:
Delete a process. If the process is currently running, it will be stopped gracefully before it will be removed.
Example:
Description:
Probing a process means detecting the vitals of the inputs of a process (e.g. frames, bitrate, codec, ...) for each input and stream. For example, a video file on disk may contain two video streams (low and high resolution) plus audio and subtitle streams in different languages.
A process must already exist before it can be probed. During probing, only the global FFmpeg options and the inputs are used to construct the FFmpeg command line.
The probe returns an object with an array of the detected streams and an array of lines from the output of the FFmpeg command.
In the following example we assume the process config with these inputs for the process test. Parts that are not relevant for probing have been left out for brevity:
The expected result would be:
The field index
refers to the input and the field stream
refers to the stream of an input.
Example: probe the inputs of a process with the ID test
:
Description:
The process report captures the log output from FFmpeg. The output is split into a prelude and the log. The prelude is everything before the first progress line is printed; everything after that is part of the log. The progress lines themselves are not part of the report. Each log line comes with a timestamp of when it was captured from the FFmpeg output.
The process report helps you analyze problems with the FFmpeg command line built from the inputs, outputs, and their options in the process configuration.
If the process is running, the FFmpeg logs will be written to the current report. As soon as the process finishes, the report will be moved to the history. The report history is also part of the response of this API endpoint.
You can define the number of log lines and how many reports should be kept in the history in the config for the datarhei Core in the ffmpeg.log section.
The following is an example of a report.
Example: read the report of a process with the ID test
:
Description:
Send a command to a process
There are basically two commands you can give to a process: start
or stop
. This is the order for the process.
In addition to these two commands, there are the commands restart
, which sends a stop
followed by a start
command packed into one, and reload
, which is the same as updating the process config with itself, e.g. in order to update references to another process.
start
the process. If the process is already started, this won't have any effect.
stop
the process. If the process is already stopped, this won't have any effect.
restart
the process. If the process is not running, this won't have any effect.
reload
the process. If the process was running, the reloaded process will start automatically.
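A small sketch of validating and wrapping one of the four commands before sending it; the {"command": ...} body shape is an assumption based on the API family described here:

```python
# The four commands described above. The {"command": ...} body shape is
# an assumption based on this API family.
VALID_COMMANDS = {"start", "stop", "restart", "reload"}

def command_body(name: str) -> dict:
    """Wrap a command name into a request body, rejecting unknown commands."""
    if name not in VALID_COMMANDS:
        raise ValueError("unknown command: " + name)
    return {"command": name}
```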
Example:
Description:
Store metadata in a process
The metadata for a process allows you to store arbitrary JSON data with that process, e.g. if you have an app that uses the datarhei Core API you can store app-specific information in the metadata.
In order not to conflict with other apps that might also write to the metadata, you have to provide a key under which your metadata is stored. Think of it as a namespace.
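For example, a hypothetical app could namespace its metadata under its own key; the endpoint path shape below is an assumption based on the examples in this section, and the key and payload are made up:

```python
# Hypothetical app storing metadata under its own key ("myapp" is an example).
process_id, key = "test", "myapp"
metadata_body = {"title": "Lobby camera", "tags": ["indoor", "hd"]}

# Assumed path shape for the metadata endpoint of this API family.
path = f"/api/v3/process/{process_id}/metadata/{key}"
```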
Add or update the metadata for a process and key.
Example: Write a metadata object into the metadata for the process test
and key desc
.
Description:
Read out the stored metadata. The key is optional. If the key is not provided, all stored metadata for that process will be in the result.
Example: read the metadata object for process test
that is stored under the key desc
.
Description:
Delete a process by its ID
Process ID
OK
Retrieve the previously stored JSON metadata under the given key. If the key is empty, all metadata will be returned.
Process ID
Key for data store
OK
Add arbitrary JSON metadata under the given key. If the key exists, all already stored metadata with this key will be overwritten. If the key doesn't exist, it will be created.
Process ID
Key for data store
Arbitrary JSON data. The null value will remove the key and its contents
OK
Issue a command to a process: start, stop, reload, restart
Process ID
Process command
OK
Probe an existing process to get detailed stream information on the inputs.
Process ID
OK
video
audio
common
Get the logs and the log history of a process.
Process ID
OK
Get the configuration of a process. This is the configuration as provided by Add or Update.
Process ID
OK
Replace an existing process.
Process ID
Process config
OK
List all known processes. Use the query parameter to filter the listed processes.
Comma separated list of fields (config, state, report, metadata) that will be part of the output. If empty, all fields will be part of the output.
Return only those processes that have this reference value. If empty, the reference will be ignored.
Comma separated list of process IDs to list. Overrides the reference. If empty, all IDs will be returned.
Glob pattern for process IDs. If empty all IDs will be returned. Intersected with results from refpattern.
Glob pattern for process references. If empty all IDs will be returned. Intersected with results from idpattern.
OK
kbit/s
kbit/s
General
Video
Audio
kbytes
kbit/s
General
Video
Audio
kbytes
kbytes
List a process by its ID. Use the filter parameter to specify the level of detail of the output.
Process ID
Comma separated list of fields (config, state, report, metadata) to be part of the output. If empty, all fields will be part of the output
OK
kbit/s
kbit/s
General
Video
Audio
kbytes
kbit/s
General
Video
Audio
kbytes
kbytes
Add a new FFmpeg process
Process config
OK
Get the state and progress data of a process.
Process ID
OK
kbit/s
kbit/s
General
Video
Audio
kbytes
kbit/s
General
Video
Audio
kbytes
kbytes