Process

Manage FFmpeg processes

The /api/v3/process family of API calls lets you manage and monitor the FFmpeg processes run by the datarhei Core.

An FFmpeg process definition, as required by the API, consists of its inputs, its outputs, and global options. This is a very minimalistic abstraction of the FFmpeg command line and assumes that you know the FFmpeg command line options required to achieve what you want.

The most minimal process definition is:

{
   "input": [{"address": "input"}],
   "output": [{"address": "output"}]
}

This will be translated to the FFmpeg command line:

-i input output

Let's use this as a starting point for a more practical example. We want to generate a test video with silence audio, encode it to H.264 and AAC and finally send it via RTMP to some destination.

Identification

You can give each process an ID with the field id. There are no restrictions regarding the format or allowed characters. If you don't provide an ID, one will be generated for you. The ID is used to identify the process later in API calls for querying and modifying the process after it has been created. An ID has to be unique for each datarhei Core.

In addition to the ID, you can provide a reference with the field reference. This allows you to attach arbitrary information to the process. It will not be interpreted at all. You can use it e.g. for grouping different processes together.

{
    "id": "some_id",
    "reference": "some reference"
}

Inputs

First, we define the inputs:

[
    {
        "id": "video_in",
        "address": "testsrc=size=1280x720:rate=25",
        "options": ["-f", "lavfi", "-re"]
    },
    {
        "id": "audio_in",
        "address": "anullsrc=r=44100:cl=stereo",
        "options": ["-f", "lavfi"]
    }
]

This will be translated to the FFmpeg command line:

-f lavfi -re -i testsrc=size=1280x720:rate=25 -f lavfi -i anullsrc=r=44100:cl=stereo

The id for each input is optional. It is used to reference the input later on and for replacing placeholders (more about placeholders below). If you don't provide an ID for an input, one will be generated for you.

Outputs

Next, we define the output. Because the inputs are raw video and raw audio data, we need to encode them.

[
    {
        "id": "out",
        "address": "rtmp://someip/live/stream",
        "options": [
            "-codec:v", "libx264",
            "-r", "25",
            "-g", "50",
            "-preset:v", "ultrafast",
            "-b:v", "2M",
            "-codec:a", "aac",
            "-b:a", "64k",
            "-f", "flv"
        ]
    }
]

This will be translated to the FFmpeg command line:

-codec:v libx264 -r 25 -g 50 -preset:v ultrafast -b:v 2M -codec:a aac -b:a 64k -f flv rtmp://someip/live/stream 

Putting it all together:

{
    "input": [
        {             
            "id": "video_in",
            "address": "testsrc=size=1280x720:rate=25",
            "options": ["-f", "lavfi", "-re"]
        },
        {
            "id": "audio_in",
            "address": "anullsrc=r=44100:cl=stereo",
            "options": ["-f", "lavfi"]
        }
    ],
    "output": [
        {
            "id": "out",
            "address": "rtmp://someip/live/stream",
            "options": [
                "-codec:v", "libx264",
                "-r", "25",
                "-g", "50",
                "-preset:v", "ultrafast",
                "-b:v", "2M",
                "-codec:a", "aac",
                "-b:a", "64k",
                "-f", "flv"
            ]
        }
    ]
}

All this together results in the command line:

-f lavfi -re -i testsrc=size=1280x720:rate=25 -f lavfi -i anullsrc=r=44100:cl=stereo -codec:v libx264 -r 25 -g 50 -preset:v ultrafast -b:v 2M -codec:a aac -b:a 64k -f flv rtmp://someip/live/stream

Global options

FFmpeg is quite talkative by default and we want to be notified only about errors. We can add global options that will be placed before any inputs:

{
    "options": ["-loglevel", "error"],
    "input": [...],
    "output": [...]
}

The inputs and outputs are left out for the sake of brevity. Now our FFmpeg command line is complete:

-loglevel error -f lavfi -re -i testsrc=size=1280x720:rate=25 -f lavfi -i anullsrc=r=44100:cl=stereo -codec:v libx264 -r 25 -g 50 -preset:v ultrafast -b:v 2M -codec:a aac -b:a 64k -f flv rtmp://someip/live/stream
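The mapping from a process config to the FFmpeg command line can be sketched as follows. This is an illustrative Python sketch of the ordering rules described above (global options first, then each input's options followed by -i and its address, then each output's options followed by its address), not the Core's actual implementation:

```python
def build_ffmpeg_args(config):
    """Assemble an FFmpeg argument list from a process config dict.

    Order: global options, then per-input options followed by
    "-i <address>", then per-output options followed by the address.
    """
    args = list(config.get("options", []))
    for inp in config.get("input", []):
        args += inp.get("options", []) + ["-i", inp["address"]]
    for out in config.get("output", []):
        args += out.get("options", []) + [out["address"]]
    return args

config = {
    "options": ["-loglevel", "error"],
    "input": [
        {"id": "video_in", "address": "testsrc=size=1280x720:rate=25",
         "options": ["-f", "lavfi", "-re"]},
        {"id": "audio_in", "address": "anullsrc=r=44100:cl=stereo",
         "options": ["-f", "lavfi"]},
    ],
    "output": [
        {"id": "out", "address": "rtmp://someip/live/stream",
         "options": ["-codec:v", "libx264", "-f", "flv"]},
    ],
}

print(" ".join(build_ffmpeg_args(config)))
```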

Control

The process config also lets you control what happens when you create the process and what happens after the process finishes:

{
    "reconnect": true,
    "reconnect_delay_seconds": 10,
    "autostart": true,
    "stale_timeout_seconds": 15,
    "options": [...],
    "input": [...],
    "output": [...]
}

The reconnect option tells the datarhei Core to restart the process after it finishes (either normally or because of an error). It waits reconnect_delay_seconds before restarting the process. Set this value to 0 to restart the process immediately.

The autostart option causes the process to be started as soon as it has been created. Setting it to true is equivalent to setting it to false, creating the process, and then issuing the start command.

The stale_timeout_seconds option causes the process to be (forcefully) stopped in case it stalls, i.e. no packets are processed for this number of seconds. Disable this feature by setting the value to 0.
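The reconnect behavior described above can be sketched as a simple supervision loop. This is illustrative only; the max_runs bound exists just to keep the sketch finite, whereas the Core keeps restarting for as long as reconnect is enabled:

```python
import time

def supervise(run_once, reconnect, reconnect_delay_seconds, max_runs=3):
    """Run a process and, if reconnect is enabled, restart it after each
    exit, waiting reconnect_delay_seconds in between. Returns the number
    of runs performed (bounded by max_runs for this sketch)."""
    runs = 0
    while runs < max_runs:
        run_once()  # blocks until the process exits, normally or with error
        runs += 1
        if not reconnect:
            break
        time.sleep(reconnect_delay_seconds)
    return runs
```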

Limits

The datarhei Core constantly monitors the vitals of each process, including its memory and CPU consumption. If you have limited resources, or you want an upper limit on the resources a process is allowed to consume, you can set the limits options in the process config:

{
    "limits": {
        "cpu_usage": 10,
        "memory_mbytes": 50,
        "waitfor_seconds": 30
    },
    "options": [...],
    "input": [...],
    "output": [...]
}

The cpu_usage option sets the limit of CPU usage in percent, e.g. not more than 10% of the CPU should be used for this process. A value of 0 (the default) will disable this option.

The memory_mbytes option sets the limit of memory usage in megabytes, e.g. not more than 50 megabytes of memory should be used. A value of 0 (the default) will disable this option.

If the resource consumption for at least one of the limits is exceeded for waitfor_seconds, then the process will be forcefully terminated.
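The interaction of the limits with waitfor_seconds can be sketched as follows. This assumes one usage sample per second and is only an illustration of the described behavior, not the Core's implementation:

```python
def should_terminate(samples, cpu_limit, mem_limit_mb, waitfor_seconds):
    """Decide whether a process exceeded its limits for too long.

    samples: list of (cpu_percent, memory_mbytes) taken once per second.
    A limit of 0 disables that check. Returns True once at least one
    limit has been exceeded for waitfor_seconds consecutive samples."""
    over = 0
    for cpu, mem in samples:
        exceeded = (cpu_limit > 0 and cpu > cpu_limit) or \
                   (mem_limit_mb > 0 and mem > mem_limit_mb)
        over = over + 1 if exceeded else 0
        if over >= waitfor_seconds:
            return True
    return False
```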

Cleanup

For long-running processes that produce a lot of files (e.g. an HLS live stream), it can happen that not all created files are removed by the FFmpeg process itself, or that the process exits and doesn't or can't clean up the files it created. This leaves files on the filesystem that shouldn't be there and just use up space.

With the optional array of cleanup rules for each output, it is possible to define rules for removing files from the memory filesystem or disk. Each rule consists of a glob pattern and a maximum allowed number of files matching that pattern, or a maximum permitted age for the files matching that pattern. The pattern starts with either memfs: or diskfs:, depending on which filesystem the rule applies to, followed by a glob pattern that identifies the files.

  • If max_files is set to a number larger than 0, the oldest matching files will be deleted as soon as the list of matching files is longer than that number.

  • If max_file_age_seconds is set to a number larger than 0, all matching files that are older than this number of seconds will be deleted.

  • If purge_on_delete is set to true, all matching files will be deleted when the process is deleted.
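The effect of a cleanup rule can be sketched like this. It is a simplification: the filesystem prefix (memfs: / diskfs:) is assumed to be stripped already, and Python's fnmatch globbing does not distinguish * from ** the way the Core's patterns do:

```python
import fnmatch
import time

def apply_cleanup_rule(files, pattern, max_files=0,
                       max_file_age_seconds=0, now=None):
    """Return the paths a cleanup rule would delete.

    files: dict mapping path -> modification time (seconds since epoch).
    A value of 0 disables the corresponding check, mirroring the API."""
    now = now if now is not None else time.time()
    matched = sorted((p for p in files if fnmatch.fnmatch(p, pattern)),
                     key=lambda p: files[p])  # oldest first
    doomed = set()
    if max_file_age_seconds > 0:
        doomed |= {p for p in matched if now - files[p] > max_file_age_seconds}
    if max_files > 0 and len(matched) > max_files:
        doomed |= set(matched[:len(matched) - max_files])
    return sorted(doomed)
```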

As of version 16.12.0, the prefixes for selecting the filesystem (e.g. diskfs: or memfs:) correspond to the configured name of the filesystem, in case you mounted one or more S3 filesystems. Reserved names are disk, diskfs, mem, and memfs. The names disk and mem are synonyms for diskfs and memfs, respectively. E.g. if you have an S3 filesystem with the name aws mounted, use the aws: prefix.

Optional cleanup configuration:

{
   "output": [
      {
         "cleanup": [{
            "pattern": "memfs:fo*ar",
            "max_files": 23,
            "max_file_age_seconds": 235,
            "purge_on_delete": true
         }]
      }
   ]
}

With the pattern parameter you can select files based on a glob pattern, with the addition of the ** placeholder to match across subdirectories. E.g. selecting all .ts files in the root directory uses the pattern /*.ts; selecting all .ts files in the whole filesystem uses the pattern /**.ts.

As part of the pattern you can use placeholders.

Examples:

diskfs:/{processid}.m3u8

The file on the disk with the ID of the process as the name and the extension .m3u8.

diskfs:/{processid}*.(m3u8|ts)

All files on disk whose names starts with the ID of the process and that have the extension .m3u8 or .ts.

diskfs:/{reference}/{processid}/{outputid}/*.ts

All files on disk that have the extension .ts and are in the folder structure denoted by the process' reference, ID, and the ID of the output this cleanup rule belongs to.

diskfs:/{reference}*

All files whose name starts with the reference of the process, e.g. /abc_1.ts, but not /abc/1.ts.

diskfs:/{reference}**

All files whose name or path starts with the reference of the process, e.g. /abc_1.ts, /abc/1.ts, or /abc_data/foobar/42.txt.

References

References allow you to use the output of another process as an input. The address of the input has to be in the form #[processid]:output=[id], where [processid] denotes the ID of the process you want to use the output from, and [id] denotes the ID of the output of that process.

Example:

Process 1
{
    "id": "process_1",
    "input": [
        {             
            "id": "video_in",
            "address": "testsrc=size=1280x720:rate=25",
            "options": ["-f", "lavfi", "-re"]
        }
    ],
    "output": [
        {
            "id": "out",
            "address": "rtmp://someip/live/stream",
            "options": [
                "-codec:v", "libx264",
                "-r", "25",
                "-g", "50",
                "-preset:v", "ultrafast",
                "-b:v", "2M",
                "-f", "flv"
            ]
        }
    ]
}
Process 2
{
    "id": "process_2",
    "input": [
        {             
            "id": "video_in",
            "address": "#process_1:output=out"
        }
    ],
    "output": [
        {
            "id": "out",
            "address": "-",
            "options": [
                "-codec", "copy",
                "-f", "null"
            ]
        }
    ]
}

The second process will use rtmp://someip/live/stream as its input address.
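Resolving such a reference address can be sketched as follows (an illustrative parser for the #[processid]:output=[id] form, not the Core's implementation):

```python
def resolve_reference(address, processes):
    """Resolve an input address of the form "#<processid>:output=<id>"
    to the referenced output's address. Addresses that are not
    references are returned unchanged."""
    if not address.startswith("#"):
        return address
    process_id, _, selector = address[1:].partition(":")
    key, _, output_id = selector.partition("=")
    if key != "output":
        raise ValueError("unsupported reference: " + address)
    for out in processes[process_id]["output"]:
        if out["id"] == output_id:
            return out["address"]
    raise KeyError(address)

processes = {
    "process_1": {"output": [{"id": "out",
                              "address": "rtmp://someip/live/stream"}]}
}
```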

Placeholder

Placeholders are a way to parametrize parts of the config. A placeholder is surrounded by curly braces, e.g. {processid}.

Some placeholder require parameters. Add parameters to a placeholder by appending a comma separated list of key/values, e.g. {placeholder,key1=value1,key2=value2}. This can be combined with escaping.

As of version 16.12.0 the value for a parameter of a placeholder can be a variable. Currently known variables are $processid and $reference. These will be replaced by the respective process ID and process reference, e.g. {rtmp,name=$processid.stream}.

Example:

Assume you have an input that gets encoded in three different resolutions, which are written to disk. With placeholders you can parametrize the output files. The input and the encoding options are left out for brevity.

{
   "id": "b",
   "reference": "a",
   "input": [...],
   "output": [
      {
         "id": "360",
         "address": "{diskfs}/{reference}_{processid}_{outputid}.mp4"
      },
      {
         "id": "720",
         "address": "{diskfs}/{reference}_{processid}_{outputid}.mp4"
      },
      {
         "id": "1080",
         "address": "{diskfs}/{reference}_{processid}_{outputid}.mp4"
      }
   ]
}

This will create three files with the names a_b_360.mp4, a_b_720.mp4, and a_b_1080.mp4 in the directory defined by storage.disk.dir.

In case you use a placeholder in a place where characters need escaping (e.g. in the options of the tee output muxer), you can define the character to be escaped in the placeholder by adding it to the placeholder name, prefixed with a ^.

Example: you have a process with the ID abc:snapshot, and in a filter option you have to escape all : in the value for the {processid} placeholder. Write {processid^:} and it will be replaced by abc\:snapshot. The escape character is always \. If there are \ in the value, they will also be escaped.
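The replacement and escaping behavior can be sketched like this. The sketch covers plain placeholders and the ^ escape syntax only; parameters ({name,key=value,...}) are omitted, and it is not the Core's actual implementation:

```python
import re

def replace_placeholders(text, values):
    """Replace {name} and {name^c} placeholders with entries from values.

    With {name^c}, backslashes in the value are escaped first, then every
    occurrence of the character c is prefixed with a backslash."""
    def sub(match):
        name, esc = match.group(1), match.group(2)
        value = values[name]
        if esc:
            value = value.replace("\\", "\\\\").replace(esc, "\\" + esc)
        return value
    return re.sub(r"\{(\w+)(?:\^(.))?\}", sub, text)
```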

All known placeholders are:

{processid}

Will be replaced by the ID of the process. Locations where this placeholder can be used: input.id, input.address, input.options, output.id, output.address, output.options, output.cleanup.pattern

{reference}

Will be replaced by the reference of the process. Locations where this placeholder can be used: input.id, input.address, input.options, output.id, output.address, output.options, output.cleanup.pattern

{inputid}

Will be replaced by the ID of the input. Locations where this placeholder can be used: input.address, input.options

{outputid}

Will be replaced by the ID of the output. Locations where this placeholder can be used: output.address, output.options, output.cleanup.pattern

{diskfs}

Will be replaced by the provided value of storage.disk.dir. Locations where this placeholder can be used: options, input.address, input.options, output.address, output.options

As of version 16.12.0 you can use the alternative syntax {fs:disk}.

{memfs}

Will be replaced by the internal base URL of the memory filesystem. This placeholder is convenient if you change e.g. the listening port of the HTTP server. Then you don't need to modify each process configuration that uses the memory filesystem.

Locations where this placeholder can be used: input.address, input.options, output.address, output.options

Example: {memfs}/foobar.m3u8 will be replaced with http://127.0.0.1:8080/memfs/foobar.m3u8 if the datarhei Core is listening on 8080 for HTTP requests.

As of version 16.12.0 you can use the alternative syntax {fs:mem}.

{fs:[name]}

As of version 16.12.0. Will be replaced by the internal base URL of the named filesystem. This placeholder is convenient if you change e.g. the listening port of the HTTP server. Then you don't need to modify each process configuration that uses that filesystem.

Locations where this placeholder can be used: input.address, input.options, output.address, output.options

Predefined names are disk and mem. Additional names correspond to the names of the mounted S3 filesystems.

Example: {fs:aws}/foobar.m3u8 will be replaced with http://127.0.0.1:8080/awsfs/foobar.m3u8 if the datarhei Core is listening on 8080 for HTTP requests, and the S3 storage is configured to have the name aws and the mountpoint /awsfs.

{rtmp}

Will be replaced by the internal address of the RTMP server. This placeholder is convenient if you change e.g. the listening port or the app of the RTMP server. Then you don't need to modify each process configuration that uses the RTMP server.

Required parameter: name (name of the stream)

Locations where this placeholder can be used: input.address, output.address

Example: {rtmp,name=foobar.stream} will be replaced with rtmp://127.0.0.1:1935/live/foobar.stream?token=abc, if the RTMP server is configured to listen on port 1935, has the app /live and requires the token abc. See the RTMP configuration.

{srt}

Will be replaced by the internal address of the SRT server. This placeholder is convenient if you change e.g. the listening port of the SRT server. Then you don't need to modify each process configuration that uses the SRT server.

Required parameters: name (name of the resource), mode (either publish or request)

Locations where this placeholder can be used: input.address, output.address

Example: {srt,name=foobar,mode=request} will be replaced with srt://127.0.0.1:6000?mode=caller&transtype=live&streamid=foobar,mode:request,token=abc&passphrase=1234, if the SRT server is configured to listen on port 6000, requires the token abc and the passphrase 1234. See the SRT configuration.

Requires Core v16.11.0+

As of version 16.12.0 the placeholder accepts the parameter latency, which defaults to 20ms.

Create

Create a new process. The ID of the process will be required in order to query or manipulate the process later on. If you don't provide an ID, it will be generated for you. The response of the successful API call includes the process config as it has been stored (including the generated ID).

You can control the process with commands.

Example:

curl http://127.0.0.1:8080/api/v3/process \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X POST \
   -d '{
         "id": "test",
         "options": ["-loglevel", "error"],
         "input": [
            {
               "address": "testsrc=size=1280x720:rate=25",
               "id": "0",
               "options": ["-f", "lavfi", "-re"]
            }
         ],
         "output": [
            {
               "address": "-",
               "id": "0",
               "options": ["-c:v", "libx264", "-f", "null"]
            }
         ]
      }'


Read

These API calls let you query the current state of the processes that are registered on the datarhei Core. For each process several aspects are available, such as:

  • config The config with which the process has been created.

  • state The current state of the process, e.g. whether it's currently running and for how long. If a process is running, the progress data is also included. This contains a list of all input and output streams with all their vitals (frames, bitrate, codec, ...). More details.

  • report The logging output from the FFmpeg process and a history of previous runs of the same process. More details.

  • metadata All metadata associated with this process. More details.

By default, all aspects are included in the process listing. If you are only interested in specific aspects, you can use the ?filter=... query parameter. Provide a comma-separated list of aspects, and only those will be included in the response, e.g. ?filter=state,report.

List processes

This API call lists all processes that are registered on the datarhei Core. You can restrict the listed processes by providing

  • a comma-separated list of specific IDs (?id=a,b,c)

  • a reference (?reference=...)

  • a pattern for the matching ID (?idpattern=...)

  • a pattern for the matching references (?refpattern=...)

With the idpattern and refpattern query parameters you can select process IDs and/or references based on a glob pattern. If you provide both a list of specific IDs (or a reference) and patterns for IDs or references, the patterns are matched first. The resulting list of IDs and references is then checked against the provided list of IDs or the reference.
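The two-stage filtering described above can be sketched as follows (an illustrative sketch using fnmatch-style globbing, not the Core's implementation):

```python
import fnmatch

def select_processes(procs, ids=None, reference=None,
                     idpattern=None, refpattern=None):
    """Filter a list of {"id": ..., "reference": ...} dicts: glob
    patterns are applied first, then the result is intersected with
    the explicit ids list and/or the reference."""
    result = procs
    if idpattern is not None:
        result = [p for p in result if fnmatch.fnmatch(p["id"], idpattern)]
    if refpattern is not None:
        result = [p for p in result if fnmatch.fnmatch(p["reference"], refpattern)]
    if ids is not None:
        result = [p for p in result if p["id"] in ids]
    if reference is not None:
        result = [p for p in result if p["reference"] == reference]
    return result
```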

Example:

curl http://127.0.0.1:8080/api/v3/process \
   -H 'accept: application/json' \
   -X GET


Process by ID

If you know the ID of the process you want the details about, you can fetch them directly. Here you can apply the filter query parameter regarding the aspects.

Example:

curl http://127.0.0.1:8080/api/v3/process/test \
   -H 'accept: application/json' \
   -X GET


Process config by ID

This endpoint lets you directly fetch the config of a process.

Example:

curl http://127.0.0.1:8080/api/v3/process/test/config \
   -H 'accept: application/json' \
   -X GET


Update

You can change the process configuration of an existing process. It doesn't matter whether the process to be updated is currently running or not. The current order will be transferred to the updated process.

The new process configuration is not required to have the same ID as the one you're about to replace. After the successful update you have to use the new process ID in order to query or manipulate the process.

The new process configuration is checked for validity before it replaces the current process configuration.

As of version 16.12.0 you can provide a partial process config for updates, i.e. you need to PUT only those fields that actually change.

Example:

curl http://127.0.0.1:8080/api/v3/process/test \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X PUT \
   -d '{
         "id": "test",
         "options": ["-loglevel", "info"],
         "input": [
            {
               "address": "testsrc=size=1280x720:rate=25",
               "id": "0",
               "options": ["-f", "lavfi", "-re"]
            }
         ],
         "output": [
            {
               "address": "-",
               "id": "0",
               "options": ["-c:v", "libx264", "-f", "null"]
            }
         ]
      }'


Delete

Delete a process. If the process is currently running, it will be stopped gracefully before it is removed.

Example:

curl http://127.0.0.1:8080/api/v3/process/test \
   -H 'accept: application/json' \
   -H 'Content-Type: application/json' \
   -X DELETE

