System Requirements
- Requires Java 21
- Use of Python-based Processors (beta feature) requires Python 3.9, 3.10, 3.11, or 3.12
- Supported Operating Systems:
  - Red Hat Enterprise Linux 8 or 9
  - Rocky Linux 8 or 9
- Supported Web Browsers:
  - Microsoft Edge: Current & (Current - 1)
  - Mozilla Firefox: Current & (Current - 1)
  - Google Chrome: Current & (Current - 1)
Starting Clockspring
Linux
- Install via RPM, or decompress and untar into the desired installation directory
- Make any desired edits in the files found under <installdir>/conf
- From the <installdir>/bin directory, execute the following commands by typing ./clockspring.sh <command>:
  - start: starts Clockspring in the background
  - stop: stops Clockspring that is running in the background
  - status: provides the current status of Clockspring
  - run: runs Clockspring in the foreground and waits for a Ctrl-C to initiate shutdown of Clockspring
  - install: installs Clockspring as a service that can then be controlled via:
    - systemctl start clockspring
    - systemctl stop clockspring
    - systemctl status clockspring
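For example, a typical first-time installation as a system service might look like the following; the /opt/clockspring path is a placeholder for your installation directory, not a documented default:
$ cd /opt/clockspring/bin
$ ./clockspring.sh install
$ sudo systemctl start clockspring
$ sudo systemctl status clockspring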
When Clockspring first starts up, the following files and directories are created:
- content_repository
- database_repository
- flowfile_repository
- provenance_repository
- work directory
- logs directory
- Within the conf directory, the flow.xml.gz file is created
See the System Properties section of this guide for more information about configuring repositories and configuration files.
Port Configuration
The following table lists the default ports used by Clockspring and the corresponding property in the clockspring.properties file.
| Function | Property | Default Value |
|---|---|---|
| HTTPS Port | nifi.web.https.port | 8443 |
| Remote Input Socket Port* | | |
| Cluster Node Protocol Port* | | |
| Cluster Node Load Balancing Port | | |
| The ports marked with an asterisk (*) have property values that are blank by default in clockspring.properties. |
ZooKeeper
The following table lists the default ports used by ZooKeeper and the corresponding property in the zookeeper.cfg file.
| Function | Property | Default Value |
|---|---|---|
| ZooKeeper Client Connection Port | clientPort | 2181 |
| ZooKeeper Follower Connection Port | | 2888 |
| ZooKeeper Leader Election Connection Port | | 3888 |
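For illustration only, a minimal zookeeper.cfg using the default ports above might look like the following; the hostnames and dataDir are placeholders, and the server.N entries use the standard ZooKeeper host:followerPort:electionPort form:
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888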
Configuration Best Practices
Typical Linux defaults are not necessarily well-tuned for the needs of an IO intensive application like Clockspring. These settings are changed by default through the configure-clockspring.sh script that is run as part of the install.
- Maximum File Handles
-
Clockspring will at any one time potentially have a very large number of file handles open. Increase the limits by editing /etc/security/limits.conf to add something like
* hard nofile 50000
* soft nofile 50000
- Maximum Forked Processes
-
Clockspring may be configured to generate a significant number of threads. To increase the allowable number, edit /etc/security/limits.conf
* hard nproc 10000
* soft nproc 10000
And your distribution may require an edit to /etc/security/limits.d/90-nproc.conf by adding
* soft nproc 10000
- Increase the number of TCP socket ports available
-
This is particularly important if your flow will be setting up and tearing down a large number of sockets in a small period of time.
sudo sysctl -w net.ipv4.ip_local_port_range="10000 65000"
- Set how long sockets stay in a TIMED_WAIT state when closed
-
You don’t want your sockets to sit and linger too long, given that you want to be able to quickly set up and tear down new sockets. It is a good idea to read more about it and adjust to something like
sudo sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait="1"
- Disable swap
-
Leaving swap enabled may cause performance issues as RAM is written to and retrieved from the disk. To configure Linux with no swap, edit /etc/sysctl.conf to add the following line:
vm.swappiness = 0
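To make these kernel settings persist across reboots, the same values can be collected in /etc/sysctl.conf (or a file under /etc/sysctl.d/). A consolidated sketch using the values recommended above:
# Clockspring tuning (values from the recommendations above)
net.ipv4.ip_local_port_range = 10000 65000
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 1
vm.swappiness = 0
Run sudo sysctl -p (or sudo sysctl --system) to apply the file without rebooting.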
Recommended Antivirus Exclusions
Antivirus software can take a long time to scan large directories and the numerous files within them. Additionally, if the antivirus software locks files or directories during a scan, those resources are unavailable to Clockspring processes, causing latency or unavailability of these resources. To prevent these performance and reliability issues from occurring, it is highly recommended to configure your antivirus software to skip scans on the following directories:
- content_repository
- flowfile_repository
- logs
- provenance_repository
- state
Logging Configuration
Clockspring uses logback as the runtime logging implementation. The conf directory contains a
standard logback.xml configuration with default appender and level settings. The
logback manual provides a complete reference of available options.
Standard Log Files
The standard logback configuration includes the following appender definitions and associated log files:
| File | Description |
|---|---|
| | Application log containing framework and component messages |
| | Deprecation log containing warnings for deprecated components and features |
| | HTTP request log containing user interface and REST API access messages |
| | User log containing authentication and authorization messages |
Deprecation Logging
The deprecation.log contains warning messages describing components and features that will be removed in
subsequent versions. Deprecation warnings should be evaluated and addressed to avoid breaking changes when upgrading to
a new major version. Resolving deprecation warnings involves upgrading to new components, changing component property
settings, or refactoring custom component classes.
Deprecation logging provides a method for checking compatibility before upgrading from one major release version to another. Upgrading to the latest minor release version will provide the most accurate set of deprecation warnings.
It is important to note that deprecation logging applies to both components and features. Logging for deprecated features requires a runtime reference to the property or method impacted. Disabled components with deprecated properties or methods will not generate deprecation logs. For this reason, it is important to run all configured components long enough to exercise standard flow behavior.
Deprecation logging can generate repeated messages depending on component configuration and usage patterns. Disabling
deprecation logging for a specific component class can be configured by adding a logger element to logback.xml.
The name attribute must start with deprecation, followed by the component class. Setting the level attribute to
OFF disables deprecation logging for the component specified.
<logger name="deprecation.org.apache.nifi.processors.ListenLegacyProtocol" level="OFF" />
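The same logger element can be used to adjust verbosity for any package or class. For example, a hypothetical troubleshooting entry that raises a single processor package to DEBUG (the package name shown is illustrative, not a documented default):
<logger name="org.apache.nifi.processors.standard" level="DEBUG" />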
Security Configuration
By default, Clockspring generates a self-signed SSL certificate and listens on port 8443. Most users should follow the instructions on the ssl-setup-guide.html page. The material below is preserved for users looking for a more advanced explanation of the necessary configurations.
This can be updated to use a CA-generated certificate by updating the following values in the clockspring.properties file:
| Property Name | Description |
|---|---|
| | File path to the key store containing the server private key and certificate entry. |
| | File path to |
| | File path to |
| | The type of key store. Supported types include |
| | The password for the key store. This property will be used as the key password when |
| | The password for the server private key entry in the key store. The |
| | File path to the trust store containing one or more certificates of trusted authorities for TLS connections. |
| | File path to |
| | The type of trust store. Supported types include |
| | The password for the trust store. |
Once the above properties have been configured, we can enable the User Interface to be accessed over HTTPS instead of HTTP. This is accomplished
by setting the nifi.web.https.host and nifi.web.https.port properties. The nifi.web.https.host property indicates which hostname the server
should run on. If it is desired that the HTTPS interface be accessible from all network interfaces, a value of 0.0.0.0 should be used. To allow
admins to configure the application to run only on specific network interfaces, nifi.web.http.network.interface* or nifi.web.https.network.interface*
properties can be specified.
It is important when enabling HTTPS that the nifi.web.http.port property be unset. Clockspring only supports running on HTTP or HTTPS, not both simultaneously.
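For example, a minimal HTTPS-only fragment of clockspring.properties, using the default 8443 port mentioned earlier and listening on all interfaces, might look like:
nifi.web.https.host=0.0.0.0
nifi.web.https.port=8443
# nifi.web.http.port is left unset so the server runs over HTTPS only
nifi.web.http.port=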
| Clockspring’s web server will REQUIRE certificate-based client authentication for users accessing the User Interface when not configured with an alternative authentication mechanism which would require one-way SSL (for instance LDAP, SAML, OpenID Connect, etc.). Enabling an alternative authentication mechanism will configure the web server to WANT certificate-based client authentication. This will allow it to support users with certificates and those without that may be logging in with credentials. See User Authentication for more details. |
Now that the User Interface has been secured, we can easily secure Site-to-Site connections and intra-cluster communications, as well. This is
accomplished by setting the nifi.remote.input.secure and nifi.cluster.protocol.is.secure properties, respectively, to true. These communications
will always REQUIRE two way SSL as the nodes will use their configured keystore/truststore for authentication.
Automatic refreshing of Clockspring’s web SSL context factory can be enabled using the following properties:
| Property Name | Description |
|---|---|
| nifi.security.autoreload.enabled | Specifies whether the SSL context factory should be automatically reloaded if updates to the keystore and truststore are detected. By default, it is set to |
| | Specifies the interval at which the keystore and truststore are checked for updates. Only applies if |
Once the nifi.security.autoreload.enabled property is set to true, any valid changes to the configured keystore and truststore will cause the SSL context
factory to be reloaded, allowing clients to pick up the changes. This is intended to allow expired certificates to be updated in the keystore and new trusted
certificates to be added in the truststore, all without having to restart the service.
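As a sketch, enabling automatic reloading might look like the following. The nifi.security.autoreload.enabled name appears above; the interval property name and value shown here are assumptions modeled on that naming and should be verified against clockspring.properties:
nifi.security.autoreload.enabled=true
# Assumed companion property controlling the check interval
nifi.security.autoreload.interval=10 secs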
| Changes to any of the nifi.security.keystore* or nifi.security.truststore* properties will not be picked up by the auto-refreshing logic, which assumes the passwords and store paths will remain the same. |
TLS Cipher Suites
The Java Runtime Environment provides the ability to specify custom TLS cipher suites to be used by servers when accepting client connections. See here for more information. To enable this feature the following properties may be set:
| Property Name | Description |
|---|---|
| | Set of ciphers that are available to be used by incoming client connections. Replaces system defaults if set. |
| | Set of ciphers that must not be used by incoming client connections. Filters available ciphers if set. |
Each property should take the form of a comma-separated list of common cipher names as specified
here. Regular expressions
(for example ^.*GCM_SHA256$) may also be specified.
The semantics match the use of the corresponding Jetty APIs.
User Authentication
Clockspring supports user authentication using a number of configurable protocols and strategies.
Username and password authentication is performed by a 'Login Identity Provider'. The Login Identity Provider is a pluggable mechanism for authenticating users via their username/password. Which Login Identity Provider to use is configured in the clockspring.properties file. Clockspring currently offers Login Identity Providers for Single User, Lightweight Directory Access Protocol (LDAP), and Kerberos username/password authentication.
The nifi.login.identity.provider.configuration.file property specifies the configuration file for Login Identity Providers. By default, this property is set to ./conf/login-identity-providers.xml.
The nifi.security.user.login.identity.provider property indicates which of the configured Login Identity Provider should be
used. The default value of this property is single-user-provider supporting authentication with a generated username and password.
For Single sign-on authentication, Clockspring will redirect users to the Identity Provider before returning to Clockspring. Clockspring will then process responses and convert attributes to application token information.
| Clockspring does not support running multiple authentication providers concurrently. |
A user cannot anonymously authenticate with a secured instance of Clockspring unless nifi.security.allow.anonymous.authentication is set to true.
If this is the case, Clockspring must also be configured with an Authorizer that supports authorizing an anonymous user. Currently, Clockspring does not ship
with any Authorizers that support this.
There are three scenarios to consider when setting nifi.security.allow.anonymous.authentication. When the user is directly calling an endpoint
with no attempted authentication then nifi.security.allow.anonymous.authentication will control whether the request is authenticated or rejected.
The other two scenarios are when the request is proxied. This could either be proxied by a Clockspring node (e.g. a node in the Clockspring cluster) or by a separate
proxy that is proxying a request for an anonymous user. In these proxy scenarios nifi.security.allow.anonymous.authentication will control whether the
request is authenticated or rejected. In all three of these scenarios if the request is authenticated it will subsequently be subjected to normal
authorization based on the requested resource.
Single User
The default Single User Login Identity Provider supports automated generation of username and password credentials.
The default username is 'admin'. The generated password will be a random string consisting of 32 characters and stored using bcrypt hashing.
The default configuration in clockspring.properties enables Single User authentication:
nifi.security.user.login.identity.provider=single-user-provider
The default login-identity-providers.xml includes a blank provider definition:
<provider>
    <identifier>single-user-provider</identifier>
    <class>org.apache.nifi.authentication.single.user.SingleUserLoginIdentityProvider</class>
    <property name="Username"/>
    <property name="Password"/>
</provider>
The following command can be used to change the Username and Password:
$ ./bin/clockspring.sh set-single-user-credentials
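For example, assuming the command accepts the new username and password as positional arguments (the values shown are placeholders):
$ ./bin/clockspring.sh set-single-user-credentials admin ChangeThisPassword12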
Lightweight Directory Access Protocol (LDAP)
Clockspring provides the ldap-setup-guide.html page for the most common LDAP implementations. The material below is preserved for users looking for a more advanced explanation of the necessary configurations.
Set the following in clockspring.properties to enable LDAP username/password authentication:
nifi.security.user.login.identity.provider=ldap-provider
Modify login-identity-providers.xml to enable the ldap-provider. Here is the sample provided in the file:
<provider>
<identifier>ldap-provider</identifier>
<class>org.apache.nifi.ldap.LdapProvider</class>
<property name="Authentication Strategy">START_TLS</property>
<property name="Manager DN"></property>
<property name="Manager Password"></property>
<property name="TLS - Keystore"></property>
<property name="TLS - Keystore Password"></property>
<property name="TLS - Keystore Type"></property>
<property name="TLS - Truststore"></property>
<property name="TLS - Truststore Password"></property>
<property name="TLS - Truststore Type"></property>
<property name="TLS - Client Auth"></property>
<property name="TLS - Protocol"></property>
<property name="TLS - Shutdown Gracefully"></property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url"></property>
<property name="User Search Base"></property>
<property name="User Search Filter"></property>
<property name="Identity Strategy">USE_DN</property>
<property name="Authentication Expiration">12 hours</property>
</provider>
The ldap-provider has the following properties:
| Property Name | Description |
|---|---|
| Authentication Strategy | How the connection to the LDAP server is authenticated. Possible values are |
| Manager DN | The DN of the manager that is used to bind to the LDAP server to search for users. |
| Manager Password | The password of the manager that is used to bind to the LDAP server to search for users. |
| TLS - Keystore | Path to the Keystore that is used when connecting to LDAP using LDAPS or START_TLS. |
| TLS - Keystore Password | Password for the Keystore that is used when connecting to LDAP using LDAPS or START_TLS. |
| TLS - Keystore Type | Type of the Keystore that is used when connecting to LDAP using LDAPS or START_TLS (i.e. |
| TLS - Truststore | Path to the Truststore that is used when connecting to LDAP using LDAPS or START_TLS. |
| TLS - Truststore Password | Password for the Truststore that is used when connecting to LDAP using LDAPS or START_TLS. |
| TLS - Truststore Type | Type of the Truststore that is used when connecting to LDAP using LDAPS or START_TLS (i.e. |
| TLS - Client Auth | Client authentication policy when connecting to LDAP using LDAPS or START_TLS. Possible values are |
| TLS - Protocol | Protocol to use when connecting to LDAP using LDAPS or START_TLS. (i.e. |
| TLS - Shutdown Gracefully | Specifies whether the TLS should be shut down gracefully before the target context is closed. Defaults to false. |
| Referral Strategy | Strategy for handling referrals. Possible values are |
| Connect Timeout | Duration of connect timeout. (i.e. |
| Read Timeout | Duration of read timeout. (i.e. |
| Url | Space-separated list of URLs of the LDAP servers (i.e. |
| User Search Base | Base DN for searching for users (i.e. |
| User Search Filter | Filter for searching for users against the |
| Identity Strategy | Strategy to identify users. Possible values are |
| Authentication Expiration | The duration of how long the user authentication is valid for. If the user never logs out, they will be required to log back in following this duration. |
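As an illustration, a SIMPLE bind configuration might fill in the provider like this; the URL, DNs, password, and filter are placeholder values rather than defaults:
<property name="Authentication Strategy">SIMPLE</property>
<property name="Manager DN">cn=manager,dc=example,dc=com</property>
<property name="Manager Password">managerPassword</property>
<property name="Url">ldap://ldap.example.com:389</property>
<property name="User Search Base">ou=people,dc=example,dc=com</property>
<property name="User Search Filter">(uid={0})</property>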
| For changes to clockspring.properties and login-identity-providers.xml to take effect Clockspring must be restarted. If the environment is clustered, configuration files must be the same on all nodes. |
Kerberos
Below is an example and description of configuring a Login Identity Provider that integrates with a Kerberos Key Distribution Center (KDC) to authenticate users.
Set the following in clockspring.properties to enable Kerberos username/password authentication:
nifi.security.user.login.identity.provider=kerberos-provider
Modify login-identity-providers.xml to enable the kerberos-provider. Here is the sample provided in the file:
<provider>
<identifier>kerberos-provider</identifier>
<class>org.apache.nifi.kerberos.KerberosProvider</class>
<property name="Default Realm">NIFI.APACHE.ORG</property>
<property name="Authentication Expiration">12 hours</property>
</provider>
The kerberos-provider has the following properties:
| Property Name | Description |
|---|---|
| Default Realm | Default realm to provide when user enters incomplete user principal (i.e. |
| Authentication Expiration | The duration of how long the user authentication is valid for. If the user never logs out, they will be required to log back in following this duration. |
See also the Kerberos Service section to allow single sign-on access via client Kerberos tickets.
| For changes to clockspring.properties and login-identity-providers.xml to take effect, Clockspring needs to be restarted. If the environment is clustered, configuration files must be the same on all nodes. |
OpenID Connect
OpenID Connect integration provides single sign-on using a specified Authorization Server. The implementation supports the Authorization Code Grant Type as described in RFC 6749 Section 4.1 and OpenID Connect Core Section 3.1.1.
The Authorization Code Grant Type implementation supports RFC 7636 Proof
Key for Code Exchange as part of the authentication process. PKCE support uses the S256 code challenge method.
After successful authentication with the Authorization Server, Clockspring generates an application Bearer Token with an expiration based on the OAuth2 Access Token expiration. Clockspring stores authorized tokens using the local State Provider and encrypts serialized information using the application Sensitive Properties Key.
The implementation enables
OpenID Connect RP-Initiated Logout 1.0 when the
Authorization Server includes an end_session_endpoint element in the OpenID Discovery configuration.
OpenID Connect integration supports using Refresh Tokens as described in OpenID Connect Core Section 12. Clockspring tracks the expiration of the application Bearer Token and uses the stored Refresh Token to renew access prior to Bearer Token expiration, based on the configured token refresh window. Clockspring does not require OpenID Connect Providers to support Refresh Tokens. When an OpenID Connect Provider does not return a Refresh Token, Clockspring requires the user to initiate a new session when the application Bearer Token expires.
The Refresh Token implementation allows the Clockspring session to continue as long as the Refresh Token is valid and the user agent presents a valid Bearer Token. The default value for the token refresh window is 60 seconds. For an Access Token with an expiration of one hour, Clockspring will attempt to renew access using the Refresh Token when receiving an HTTP request 59 minutes after authenticating the Access Token. Revoked Refresh Tokens or expired application Bearer Tokens result in standard session timeout behavior, requiring the user to initiate a new session.
The OpenID Connect implementation supports OAuth 2.0 Token Revocation as defined in
RFC 7009. OpenID Connect Discovery configuration must include a
revocation_endpoint element that supports RFC 7009 standards. The application sends revocation requests for Refresh
Tokens when the authenticated Resource Owner initiates the logout process.
The implementation includes a scheduled process for removing and revoking expired Refresh Tokens when the corresponding Access Token has expired, indicating that the Resource Owner has terminated the application session. Scheduled session termination occurs when the user closes the browser without initiating the logout process. The scheduled process avoids extended storage of Refresh Tokens for users who are no longer interacting with the application.
The OpenID Connect implementation also supports the OAuth 2 Client Credentials Grant Type as described in
RFC 6749 Section 4.4. With OpenID Connect integration enabled,
Clockspring evaluates the JSON Web Token Issuer Claim named iss and delegates to either the configured Authorization Server
or internal processing for signature verification. When the iss claim value matches the issuer from the OpenID
Connect Discovery Configuration, Clockspring uses the JSON Web Keys from the Authorization Server for signature verification.
In all other cases, Clockspring verifies JSON Web Token signatures using an internal public key.
The Client Credentials Grant Type enables machine-to-machine authentication and requires token request processing outside
of Clockspring itself to obtain an Access Token. Clockspring must also be configured to authorize requests based on the identity
defined in a signed Access Token. Access Tokens obtained using the Client Credentials Grant Type do not include the
standard email, which requires configuring a fallback claim to identify the machine user. The most common claim for
identification is the Subject Claim named sub, which contains the Client ID.
OpenID Connect integration supports the following settings in clockspring.properties.
| Property Name | Description |
|---|---|
| | The Discovery Configuration URL for the OpenID Connect Provider. Supports URLs with |
| | Socket Connect timeout when communicating with the OpenID Connect Provider. The default value is |
| | Socket Read timeout when communicating with the OpenID Connect Provider. The default value is |
| | The Client ID for Clockspring registered with the OpenID Connect Provider |
| | The Client Secret for Clockspring registered with the OpenID Connect Provider |
| | The preferred algorithm for validating identity tokens. If this value is blank, it will default to |
| | Comma separated scopes that are sent to OpenID Connect Provider in addition to |
| | Claim that identifies the authenticated user. The default value is |
| | Comma-separated list of possible fallback claims used to identify the user when the |
| | Name of the ID token claim that contains an array of group names of which the user is a member. Application groups must be supplied from a User Group Provider with matching names in order for the authorization process to use ID token claim groups. The default value is |
| | HTTPS Certificate Trust Store Strategy defines the source of certificate authorities that Clockspring uses when communicating with the OpenID Connect Provider. The value of |
| | The Token Refresh Window specifies the amount of time before the Clockspring authorization session expires when the application will attempt to renew access using a cached Refresh Token. The default is |
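The property names are omitted from the table above. As an assumption based on the upstream NiFi naming convention used elsewhere in this guide, a minimal OpenID Connect fragment of clockspring.properties might resemble (all values are placeholders):
nifi.security.user.oidc.discovery.url=https://idp.example.com/.well-known/openid-configuration
nifi.security.user.oidc.client.id=clockspring-client
nifi.security.user.oidc.client.secret=<client-secret>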
OpenID Connect REST Resources
OpenID Connect authentication enables the following REST resources for integration with an OpenID Connect 1.0 Authorization Server:
| Resource Path | Description |
|---|---|
| /nifi-api/access/oidc/callback/consumer | Process OIDC 1.0 Login Authentication Responses from an Authentication Server. |
| /nifi/logout-complete | Path for redirect after successful OIDC RP-Initiated Logout 1.0 processing |
SAML
Clockspring provides the saml-setup-guide.html page for the most common SAML implementations. The material below is preserved for users looking for a more advanced explanation of the necessary configurations.
To enable authentication via SAML the following properties must be configured in clockspring.properties.
Configuring a Metadata URL and an Entity Identifier enables Clockspring to act as a SAML 2.0 Relying Party, allowing users to authenticate using an account managed through a SAML 2.0 Asserting Party.
| Property Name | Description |
|---|---|
| | The URL for obtaining the identity provider’s metadata. The metadata can be retrieved from the identity provider via |
| | The entity id of the service provider. This value will be used as the |
| | The name of a SAML assertion attribute containing the user’s identity. This property is optional and if not specified, or if the attribute is not found, then the NameID of the Subject will be used. |
| | The name of a SAML assertion attribute containing group names the user belongs to. This property is optional, but if populated the groups will be passed along to the authorization process. |
| | Controls the value of |
| | Controls the value of |
| | The algorithm to use when signing SAML messages. Reference the Open SAML Signature Constants for a list of valid values. If not specified, a default of SHA-256 will be used. The default value is |
| | The expiration of the JWT that will be produced from a successful SAML authentication response. The default value is |
| | Enables SAML SingleLogout which causes a logout from Clockspring to logout of the identity provider. By default, a logout of Clockspring will only remove the JWT. The default value is |
| | The truststore strategy when the IDP metadata URL begins with https. A value of |
| | The connection timeout when communicating with the SAML IDP. The default value is |
| | The read timeout when communicating with the SAML IDP. The default value is |
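Similarly, as an assumption based on the upstream NiFi property naming convention, a minimal SAML fragment of clockspring.properties might resemble (both values are placeholders):
nifi.security.user.saml.idp.metadata.url=https://idp.example.com/metadata.xml
nifi.security.user.saml.sp.entity.id=org:example:clockspring:saml:sp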
SAML REST Resources
SAML authentication enables the following REST API resources for integration with a SAML 2.0 Asserting Party:
| Resource Path | Description |
|---|---|
| /nifi-api/access/saml/local-logout/request | Complete SAML 2.0 Logout processing without communicating with the Asserting Party |
| /nifi-api/access/saml/login/consumer | Process SAML 2.0 Login Requests assertions using HTTP-POST or HTTP-REDIRECT binding |
| /nifi-api/access/saml/metadata | Retrieve SAML 2.0 entity descriptor metadata as XML |
| /nifi-api/access/saml/single-logout/consumer | Process SAML 2.0 Single Logout Request assertions using HTTP-POST or HTTP-REDIRECT binding. Requires Single Logout to be enabled. |
| /nifi-api/access/saml/single-logout/request | Complete SAML 2.0 Single Logout processing initiating a request to the Asserting Party. Requires Single Logout to be enabled. |
JSON Web Tokens
Clockspring uses JSON Web Tokens to provide authenticated access after the initial login process. Generated JSON Web Tokens include the authenticated user identity as well as the issuer and expiration from the configured Login Identity Provider.
Clockspring uses generated Ed25519 Key Pairs to support the EdDSA algorithm for JSON Web Signatures. The system stores Ed25519
Public Keys using the configured local State Provider and retains the Private Key in memory. This approach supports signature verification
for the expiration configured in the Login Identity Provider without persisting the private key.
JSON Web Token support includes revocation on logout using JSON Web Token Identifiers. The system denies access for expired tokens based on the Login Identity Provider configuration, but revocation invalidates the token prior to expiration. The system stores revoked identifiers using the configured local State Provider and runs a scheduled command to delete revoked identifiers after the associated expiration.
The following settings can be configured in clockspring.properties to control JSON Web Token signing.
| Property Name | Description |
|---|---|
| | JSON Web Signature Key Rotation Period defines how often the system generates a new RSA Key Pair, expressed as an ISO 8601 duration. The default is one hour: |
Authorization
X.509 Client Certificates
Clockspring supports authentication using mutual TLS with X.509 client certificates as part of the standard configuration when running with HTTPS enabled. Client certificate authentication is required for communication between Clockspring nodes in a clustered deployment and cannot be disabled.
Clockspring sends a certificate request during the TLS handshake as described in RFC 8446 Section 4.3.2 for TLS 1.3. When configured for authentication using a Login Identity Provider or Single Sign-On, Clockspring sends a certificate request but does not require the client to respond. In absence of other authentication strategies, Clockspring requires the client to present a certificate during the TLS handshake process. The Clockspring security trust store properties define the certificate authorities accepted as issuers of client certificates.
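For example, a client certificate can be presented to the REST API with a standard tool such as curl; the certificate, key, CA bundle, hostname, and endpoint below are placeholders:
curl --cert ./client.crt --key ./client.key --cacert ./ca.crt \
    https://clockspring.example.com:8443/nifi-api/flow/current-user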
Proxied Entities Chain
Clockspring supports proxied entity access in conjunction with X.509 client certificate authentication. Clients that present trusted certificates for mutual TLS authentication can send proxied identity information through specified HTTP request headers. The client certificate subject principal must be authorized to send a proxy request, based on the configured Authorizer.
Authorized proxies can present one or more proxied identities using an HTTP request header and a value delimited using angle bracket characters.
- Header Name: X-ProxiedEntitiesChain
- Value: <user-identity>
Multiple proxied entities can be specified to indicate a chain of proxy services.
- Header Name: X-ProxiedEntitiesChain
- Value: <user-identity><proxy-server-identity>
Proxied identities that contain characters outside of US-ASCII must be encoded using Base64 and wrapped with additional angle brackets, as in the encoding example following this list.
- Header Name: X-ProxiedEntitiesChain
- Value: <<dXNlci1pZGVudGl0eQ>>
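For example, a proxy could produce the encoded value for an identity with a standard Base64 tool; the value shown in the header above is this output with the trailing = padding removed:
$ echo -n 'user-identity' | base64
dXNlci1pZGVudGl0eQ==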
Clockspring includes an HTTP response header on successful authentication of HTTP requests with proxied entities.
- Header Name: X-ProxiedEntitiesAccepted
- Value: true
Clockspring includes an HTTP response header on failed authentication of proxied entities describing the error.
- Header Name: X-ProxiedEntitiesDetails
- Value: error message
Proxied Entity Groups
Clockspring supports passing group membership information together with proxied identity information from clients that present authorized X.509 client certificates.
Authorized proxies can pass one or more group names using an HTTP request header and values delimited using angle bracket characters.
- Header Name: X-ProxiedEntityGroups
- Value: <first-group><second-group>
Proxied group names follow the same encoding standards as proxied entities, requiring Base64 encoding for characters outside of US-ASCII.
Cross-Site Request Forgery Protection
Clockspring uses Cross-Site Request Forgery protection as part of user interface access based on session cookies. CSRF protection builds on standard Spring Security features and implements the double submit cookie strategy. The implementation strategy relies on the server generating and sending a random request token cookie at the beginning of the session. The client browser stores the cookie, JavaScript application code reads the cookie, and sets the value in a custom HTTP header on subsequent requests.
Clockspring applies the SameSite attribute with a value of Strict to session cookies, which instructs supporting web
browsers to avoid sending the cookie on requests that a third party initiates. These protections mitigate a number of
potential threats.
Cookie names are not considered part of the public REST API and are subject to change in minor release
versions. Programmatic HTTP requests to the Clockspring REST API should use the standard HTTP Authorization header when
sending access tokens instead of the session cookie that the Clockspring user interface uses.
Clockspring deployments that include HTTP load balanced access with Session Affinity depend on custom HTTP cookies, requiring custom programmatic clients to store and send cookies for the duration of an authenticated session. Programmatic clients in these scenarios should limit cookie storage to cookie names specific to the HTTP load balancer to avoid HTTP 403 Forbidden errors related to CSRF filtering.
The CSRF implementation sends the following HTTP cookie to set the random request token for the session:
- Cookie Name: __Secure-Request-Token
- Value: Random UUID
The CSRF security filter expects the following HTTP request header on non-idempotent methods such as POST or PUT:
- Header Name: Request-Token
- Value: UUID matching the __Secure-Request-Token cookie value
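As a sketch, a programmatic client that mimics the browser behavior would replay the cookie value in the Request-Token header on state-changing requests; the host, resource path, and token value below are placeholders:
curl --cookie '__Secure-Request-Token=<uuid>' \
    --header 'Request-Token: <uuid>' \
    --request POST https://clockspring.example.com:8443/nifi-api/<resource-path>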
Multi-Tenant Authorization
After you have configured Clockspring to run securely and with an authentication mechanism, you must configure who has access to the system, and the level of their access. You can do this using 'multi-tenant authorization'. Multi-tenant authorization enables multiple groups of users (tenants) to command, control, and observe different parts of the dataflow, with varying levels of authorization. When an authenticated user attempts to view or modify a Clockspring resource, the system checks whether the user has privileges to perform that action. These privileges are defined by policies that you can apply system-wide or to individual components.
Authorizer Configuration
An 'authorizer' grants users the privileges to manage users and policies by creating preliminary authorizations at startup.
Authorizers are configured using two properties in the clockspring.properties file:
- The nifi.authorizer.configuration.file property specifies the configuration file where authorizers are defined. By default, the authorizers.xml file located in the root installation conf directory is selected.
- The nifi.security.user.authorizer property indicates which of the configured authorizers in the authorizers.xml file to use, as shown in the example below.
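For example, the defaults described above correspond to the following clockspring.properties entries; managed-authorizer matches the authorizer identifier used in the sample authorizers.xml later in this section:
nifi.authorizer.configuration.file=./conf/authorizers.xml
nifi.security.user.authorizer=managed-authorizer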
Authorizers.xml Setup
The authorizers.xml file is used to define and configure available authorizers. The default authorizer is the StandardManagedAuthorizer. The managed authorizer is composed of a UserGroupProvider and an AccessPolicyProvider. The users, groups, and access policies will be loaded and optionally configured through these providers. The managed authorizer will make all access decisions based on these provided users, groups, and access policies.
During startup there is a check to ensure that there are no two users/groups with the same identity/name. This check is executed regardless of the configured implementation. This is necessary because this is how users/groups are identified and authorized during access decisions.
FileUserGroupProvider
The default UserGroupProvider is the FileUserGroupProvider, however, you can develop additional UserGroupProviders as extensions. The FileUserGroupProvider has the following properties:
- Users File - The file where the FileUserGroupProvider stores users and groups. By default, the users.xml file in the conf directory is chosen.
- Legacy Authorized Users File - The full path to an existing authorized-users.xml that will automatically be used to load the users and groups into the Users File.
- Initial User Identity - The identity of a user or system to seed the Users File. The name of each property must be unique, for example: "Initial User Identity A", "Initial User Identity B", "Initial User Identity C" or "Initial User Identity 1", "Initial User Identity 2", "Initial User Identity 3"
LdapUserGroupProvider
Another option for the UserGroupProvider is the LdapUserGroupProvider. By default, this option is commented out but can be configured in lieu of the FileUserGroupProvider. This will sync users and groups from a directory server and will present them in the UI in read-only form.
The LdapUserGroupProvider has the following properties:
| Property Name | Description |
|---|---|
| Authentication Strategy | How the connection to the LDAP server is authenticated. Possible values are |
| Manager DN | The DN of the manager that is used to bind to the LDAP server to search for users. |
| Manager Password | The password of the manager that is used to bind to the LDAP server to search for users. |
| TLS - Keystore | Path to the Keystore that is used when connecting to LDAP using LDAPS or START_TLS. |
| TLS - Keystore Password | Password for the Keystore that is used when connecting to LDAP using LDAPS or START_TLS. |
| TLS - Keystore Type | Type of the Keystore that is used when connecting to LDAP using LDAPS or START_TLS (i.e. |
| TLS - Truststore | Path to the Truststore that is used when connecting to LDAP using LDAPS or START_TLS. |
| TLS - Truststore Password | Password for the Truststore that is used when connecting to LDAP using LDAPS or START_TLS. |
| TLS - Truststore Type | Type of the Truststore that is used when connecting to LDAP using LDAPS or START_TLS (i.e. |
| TLS - Client Auth | Client authentication policy when connecting to LDAP using LDAPS or START_TLS. Possible values are |
| TLS - Protocol | Protocol to use when connecting to LDAP using LDAPS or START_TLS. (i.e. |
| TLS - Shutdown Gracefully | Specifies whether the TLS should be shut down gracefully before the target context is closed. Defaults to false. |
| Referral Strategy | Strategy for handling referrals. Possible values are |
| Connect Timeout | Duration of connect timeout. (i.e. |
| Read Timeout | Duration of read timeout. (i.e. |
| Url | Space-separated list of URLs of the LDAP servers (i.e. |
| Page Size | Sets the page size when retrieving users and groups. If not specified, no paging is performed. |
| Group Membership - Enforce Case Sensitivity | Sets whether group membership decisions are case sensitive. When a user or group is inferred (by not specifying a user or group search base, user identity attribute, or group name attribute) case sensitivity is enforced, since the value to use for the user identity or group name would be ambiguous. Defaults to false. |
| Sync Interval | Duration of time between syncing users and groups. (i.e. |
| User Search Base | Base DN for searching for users (i.e. |
| User Object Class | Object class for identifying users (i.e. |
| User Search Scope | Search scope for searching users ( |
| User Search Filter | Filter for searching for users against the |
| User Identity Attribute | Attribute to use to extract user identity (i.e. |
| User Group Name Attribute | Attribute to use to define group membership (i.e. |
| User Group Name Attribute - Referenced Group Attribute | If blank, the value of the attribute defined in |
| Group Search Base | Base DN for searching for groups (i.e. |
| Group Object Class | Object class for identifying groups (i.e. |
| Group Search Scope | Search scope for searching groups ( |
| Group Search Filter | Filter for searching for groups against the |
| Group Name Attribute | Attribute to use to extract group name (i.e. |
| Group Member Attribute | Attribute to use to define group membership (i.e. |
| Group Member Attribute - Referenced User Attribute | If blank, the value of the attribute defined in |
| Any identity mapping rules specified in clockspring.properties will also be applied to the user identities. Group names are not mapped. |
AzureGraphUserGroupProvider
The AzureGraphUserGroupProvider fetches users and groups from Azure Active Directory (AAD) using the Microsoft Graph API.
A subset of groups are fetched based on filter conditions (Group Filter Prefix, Group Filter Suffix, Group Filter Substring, and Group Filter List Inclusion) evaluated against the displayName property of the Azure AD group. Member users are then loaded from these groups. At least one filter condition should be specified.
This provider requires an Azure app registration with:
- Microsoft Graph Group.Read.All and User.Read.All API permissions with admin consent
- A client secret or application password
- ID token claims for upn and/or email
The AzureGraphUserGroupProvider has the following properties:
| Property Name | Description |
|---|---|
| | Duration of delay between each user and group refresh. Default is |
| | The endpoint of the Azure AD login. This can be found in the Azure portal under Azure Active Directory → App registrations → [application name] → Endpoints. For example, the global authority endpoint is |
| | The endpoint of the Azure Graph API, with the version identifier attached. The base url can be found in the Azure portal under Azure Active Directory → App registrations → [application name] → Endpoints. For example, the global graph endpoint is |
| | The url for the Graph api scope. See https://learn.microsoft.com/en-us/azure/active-directory/develop/scopes-oidc for an explanation of scopes. This usually only needs to be changed if you are connecting to a different |
| | Tenant ID or Directory ID of the Azure AD tenant. This can be found in the Azure portal under Azure Active Directory → App registrations → [application name] → Directory (tenant) ID. |
| | Client ID or Application ID of the Azure app registration. This can be found in the Azure portal under Azure Active Directory → App registrations → [application name] → Overview → Application (client) ID. |
| | A client secret from the Azure app registration. Secrets can be created in the Azure portal under Azure Active Directory → App registrations → [application name] → Certificates & secrets → Client secrets → [+] New client secret. |
| Group Filter Prefix | Prefix filter for Azure AD groups. Matches against the group displayName to retrieve only groups with names starting with the provided prefix. |
| Group Filter Suffix | Suffix filter for Azure AD groups. Matches against the group displayName to retrieve only groups with names ending with the provided suffix. |
| Group Filter Substring | Substring filter for Azure AD groups. Matches against the group displayName to retrieve only groups with names containing the provided substring. |
| Group Filter List Inclusion | Comma-separated list of Azure AD groups. If no string-based matching filter (i.e., prefix, suffix, and substring) is specified, set this property to avoid fetching all groups and users in the Azure AD tenant. |
| | Page size to use with the Microsoft Graph API. Set to 0 to disable paging API calls. Default: 50, Max: 999. |
| | The property of the user directory object mapped to the user name field. Default is 'upn'. 'email' is another option when |
Like LdapUserGroupProvider, the AzureGraphUserGroupProvider configuration is commented out in the authorizers.xml file. Refer to the comment for a starter configuration.
Composite Implementations
Another option for the UserGroupProvider is a composite implementation. This means that multiple sources/implementations can be configured and composed. For instance, an admin can configure users/groups to be loaded from a file and a directory server. There are two composite implementations, one that supports multiple UserGroupProviders and one that supports multiple UserGroupProviders and a single configurable UserGroupProvider.
The CompositeUserGroupProvider will provide support for retrieving users and groups from multiple sources. The CompositeUserGroupProvider has the following property:
| Property Name | Description |
|---|---|
| User Group Provider [unique key] | The identifier of user group providers to load from. The name of each property must be unique, for example: "User Group Provider A", "User Group Provider B", "User Group Provider C" or "User Group Provider 1", "User Group Provider 2", "User Group Provider 3" |
| Any identity mapping rules specified in clockspring.properties are not applied in this implementation. This behavior would need to be applied by the base implementation. |
The CompositeConfigurableUserGroupProvider will provide support for retrieving users and groups from multiple sources. Additionally, a single configurable user group provider is required. Users from the configurable user group provider are configurable, however users loaded from one of the User Group Provider [unique key] will not be. The CompositeConfigurableUserGroupProvider has the following properties:
| Property Name | Description |
|---|---|
| Configurable User Group Provider | A configurable user group provider. |
| User Group Provider [unique key] | The identifier of user group providers to load from. The name of each property must be unique, for example: "User Group Provider A", "User Group Provider B", "User Group Provider C" or "User Group Provider 1", "User Group Provider 2", "User Group Provider 3" |
FileAccessPolicyProvider
The default AccessPolicyProvider is the FileAccessPolicyProvider, however, you can develop additional AccessPolicyProviders as extensions. The FileAccessPolicyProvider has the following properties:
| Property Name | Description |
|---|---|
| User Group Provider | The identifier for a User Group Provider defined above that will be used to access users and groups for use in the managed access policies. |
| Authorizations File | The file where the FileAccessPolicyProvider will store policies. |
| Initial Admin Identity | The identity of an initial admin user that will be granted access to the UI and given the ability to create additional users, groups, and policies. The value of this property could be a DN when using certificates or LDAP, or a Kerberos principal. This property will only be used when there are no other policies defined. If this property is specified then a Legacy Authorized Users File can not be specified. |
| Legacy Authorized Users File | The full path to an existing authorized-users.xml that will be automatically converted to the new authorizations model. If this property is specified then an Initial Admin Identity can not be specified, and this property will only be used when there are no other users, groups, and policies defined. |
| Node Identity [unique key] | The identity of a cluster node. When clustered, a property for each node should be defined, so that every node knows about every other node. If not clustered these properties can be ignored. The name of each property must be unique, for example for a three node cluster: "Node Identity A", "Node Identity B", "Node Identity C" or "Node Identity 1", "Node Identity 2", "Node Identity 3" |
| Node Group | The name of a group containing cluster nodes. The typical use for this is when nodes are dynamically added/removed from the cluster. |
| The identities configured in the Initial Admin Identity, the Node Identity properties, or discovered in a Legacy Authorized Users File must be available in the configured User Group Provider. |
| Any users in the legacy users file must be found in the configured User Group Provider. |
| Any identity mapping rules specified in clockspring.properties will also be applied to the node identities, so the values should be the unmapped identities (i.e. full DN from a certificate). This identity must be found in the configured User Group Provider. |
StandardManagedAuthorizer
The default authorizer is the StandardManagedAuthorizer, however, you can develop additional authorizers as extensions. The StandardManagedAuthorizer has the following property:
| Property Name | Description |
|---|---|
| Access Policy Provider | The identifier for an Access Policy Provider defined above. |
Initial Admin Identity (New Instance)
If you are setting up a secured instance for the first time, you must manually designate an 'Initial Admin Identity' in the authorizers.xml file. This initial admin user is granted access to the UI and given the ability to create additional users, groups, and policies. The value of this property could be a DN (when using certificates or LDAP) or a Kerberos principal. If you are the administrator, add yourself as the 'Initial Admin Identity'.
After you have edited and saved the authorizers.xml file, restart Clockspring. The 'Initial Admin Identity' user and administrative policies are added to the users.xml and authorizations.xml files during restart. Once Clockspring starts, the 'Initial Admin Identity' user is able to access the UI and begin managing users, groups, and policies.
| For a brand new secure flow, providing the "Initial Admin Identity" gives that user access to get into the UI and to manage users, groups and policies. If that user wants to start modifying the flow they need to grant themselves policies for the root process group. The system is unable to do this automatically because in a new flow the UUID of the root process group is not permanent until the flow.xml.gz is generated. If the instance is an upgrade from an existing flow.xml.gz the "Initial Admin Identity" user is automatically given the privileges to modify the flow. |
Some common use cases are described below.
File-based (LDAP Authentication)
Here is an example LDAP entry using the name John Smith:
<authorizers>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Initial User Identity 1">cn=John Smith,ou=people,dc=example,dc=com</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">file-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">cn=John Smith,ou=people,dc=example,dc=com</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1"></property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
File-based (Kerberos Authentication)
Here is an example Kerberos entry using the name John Smith and realm NIFI.APACHE.ORG:
<authorizers>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Initial User Identity 1">johnsmith@NIFI.APACHE.ORG</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">file-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">johnsmith@NIFI.APACHE.ORG</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1"></property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
LDAP-based Users/Groups Referencing User DN
Here is an example loading users and groups from LDAP. Group membership will be driven through the member attribute of each group. Authorization will still use file-based access policies:
dn: cn=User 1,ou=users,o=clockspring
objectClass: organizationalPerson
objectClass: person
objectClass: inetOrgPerson
objectClass: top
cn: User 1
sn: User1
uid: user1
dn: cn=User 2,ou=users,o=clockspring
objectClass: organizationalPerson
objectClass: person
objectClass: inetOrgPerson
objectClass: top
cn: User 2
sn: User2
uid: user2
dn: cn=admins,ou=groups,o=clockspring
objectClass: groupOfNames
objectClass: top
cn: admins
member: cn=User 1,ou=users,o=clockspring
member: cn=User 2,ou=users,o=clockspring
<authorizers>
<userGroupProvider>
<identifier>ldap-user-group-provider</identifier>
<class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
<property name="Authentication Strategy">ANONYMOUS</property>
<property name="Manager DN"></property>
<property name="Manager Password"></property>
<property name="TLS - Keystore"></property>
<property name="TLS - Keystore Password"></property>
<property name="TLS - Keystore Type"></property>
<property name="TLS - Truststore"></property>
<property name="TLS - Truststore Password"></property>
<property name="TLS - Truststore Type"></property>
<property name="TLS - Client Auth"></property>
<property name="TLS - Protocol"></property>
<property name="TLS - Shutdown Gracefully"></property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldap://localhost:10389</property>
<property name="Page Size"></property>
<property name="Sync Interval">30 mins</property>
<property name="Group Membership - Enforce Case Sensitivity">false</property>
<property name="User Search Base">ou=users,o=clockspring</property>
<property name="User Object Class">person</property>
<property name="User Search Scope">ONE_LEVEL</property>
<property name="User Search Filter"></property>
<property name="User Identity Attribute">cn</property>
<property name="User Group Name Attribute"></property>
<property name="User Group Name Attribute - Referenced Group Attribute"></property>
<property name="Group Search Base">ou=groups,o=clockspring</property>
<property name="Group Object Class">groupOfNames</property>
<property name="Group Search Scope">ONE_LEVEL</property>
<property name="Group Search Filter"></property>
<property name="Group Name Attribute">cn</property>
<property name="Group Member Attribute">member</property>
<property name="Group Member Attribute - Referenced User Attribute"></property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">ldap-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">John Smith</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1"></property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
The Initial Admin Identity value would have been loaded from the cn of John Smith’s entry based on the User Identity Attribute value.
LDAP-based Users/Groups Referencing User Attribute
Here is an example loading users and groups from LDAP. Group membership will be driven through the memberUid attribute of each group. Authorization will still use file-based access policies:
dn: uid=User 1,ou=Users,dc=local
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: user1
cn: User 1
dn: uid=User 2,ou=Users,dc=local
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: user2
cn: User 2
dn: cn=Managers,ou=Groups,dc=local
objectClass: posixGroup
cn: Managers
memberUid: user1
memberUid: user2
<authorizers>
<userGroupProvider>
<identifier>ldap-user-group-provider</identifier>
<class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
<property name="Authentication Strategy">ANONYMOUS</property>
<property name="Manager DN"></property>
<property name="Manager Password"></property>
<property name="TLS - Keystore"></property>
<property name="TLS - Keystore Password"></property>
<property name="TLS - Keystore Type"></property>
<property name="TLS - Truststore"></property>
<property name="TLS - Truststore Password"></property>
<property name="TLS - Truststore Type"></property>
<property name="TLS - Client Auth"></property>
<property name="TLS - Protocol"></property>
<property name="TLS - Shutdown Gracefully"></property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldap://localhost:10389</property>
<property name="Page Size"></property>
<property name="Sync Interval">30 mins</property>
<property name="Group Membership - Enforce Case Sensitivity">false</property>
<property name="User Search Base">ou=Users,dc=local</property>
<property name="User Object Class">posixAccount</property>
<property name="User Search Scope">ONE_LEVEL</property>
<property name="User Search Filter"></property>
<property name="User Identity Attribute">cn</property>
<property name="User Group Name Attribute"></property>
<property name="User Group Name Attribute - Referenced Group Attribute"></property>
<property name="Group Search Base">ou=Groups,dc=local</property>
<property name="Group Object Class">posixGroup</property>
<property name="Group Search Scope">ONE_LEVEL</property>
<property name="Group Search Filter"></property>
<property name="Group Name Attribute">cn</property>
<property name="Group Member Attribute">memberUid</property>
<property name="Group Member Attribute - Referenced User Attribute">uid</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">ldap-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">John Smith</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1"></property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
Composite - File and LDAP-based Users/Groups
Here is an example composite implementation that loads users and groups from LDAP and a local file. Group membership will be driven through the member attribute of each group. The users from LDAP will be read-only, while the users loaded from the file will be configurable in the UI.
dn: cn=User 1,ou=users,o=clockspring
objectClass: organizationalPerson
objectClass: person
objectClass: inetOrgPerson
objectClass: top
cn: User 1
sn: User1
uid: user1
dn: cn=User 2,ou=users,o=clockspring
objectClass: organizationalPerson
objectClass: person
objectClass: inetOrgPerson
objectClass: top
cn: User 2
sn: User2
uid: user2
dn: cn=admins,ou=groups,o=clockspring
objectClass: groupOfNames
objectClass: top
cn: admins
member: cn=User 1,ou=users,o=clockspring
member: cn=User 2,ou=users,o=clockspring
<authorizers>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Initial User Identity 1">cn=clockspring-node1,ou=servers,dc=example,dc=com</property>
<property name="Initial User Identity 2">cn=clockspring-node2,ou=servers,dc=example,dc=com</property>
</userGroupProvider>
<userGroupProvider>
<identifier>ldap-user-group-provider</identifier>
<class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
<property name="Authentication Strategy">ANONYMOUS</property>
<property name="Manager DN"></property>
<property name="Manager Password"></property>
<property name="TLS - Keystore"></property>
<property name="TLS - Keystore Password"></property>
<property name="TLS - Keystore Type"></property>
<property name="TLS - Truststore"></property>
<property name="TLS - Truststore Password"></property>
<property name="TLS - Truststore Type"></property>
<property name="TLS - Client Auth"></property>
<property name="TLS - Protocol"></property>
<property name="TLS - Shutdown Gracefully"></property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldap://localhost:10389</property>
<property name="Page Size"></property>
<property name="Sync Interval">30 mins</property>
<property name="Group Membership - Enforce Case Sensitivity">false</property>
<property name="User Search Base">ou=users,o=clockspring</property>
<property name="User Object Class">person</property>
<property name="User Search Scope">ONE_LEVEL</property>
<property name="User Search Filter"></property>
<property name="User Identity Attribute">cn</property>
<property name="User Group Name Attribute"></property>
<property name="User Group Name Attribute - Referenced Group Attribute"></property>
<property name="Group Search Base">ou=groups,o=clockspring</property>
<property name="Group Object Class">groupOfNames</property>
<property name="Group Search Scope">ONE_LEVEL</property>
<property name="Group Search Filter"></property>
<property name="Group Name Attribute">cn</property>
<property name="Group Member Attribute">member</property>
<property name="Group Member Attribute - Referenced User Attribute"></property>
</userGroupProvider>
<userGroupProvider>
<identifier>composite-user-group-provider</identifier>
<class>org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider</class>
<property name="Configurable User Group Provider">file-user-group-provider</property>
<property name="User Group Provider 1">ldap-user-group-provider</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">composite-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">John Smith</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1">cn=clockspring-node1,ou=servers,dc=example,dc=com</property>
<property name="Node Identity 2">cn=clockspring-node2,ou=servers,dc=example,dc=com</property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
In this example, the users and groups are loaded from LDAP but the servers are managed in a local file. The Initial Admin Identity value came from an attribute in an LDAP entry based on the User Identity Attribute. The Node Identity values are established in the local file using the Initial User Identity properties.
| Do not manually edit the authorizations.xml file. Create authorizations only during initial setup. |
Cluster Node Identities
If you are running a clustered environment you must specify the identities for each node. The authorization policies required for the nodes to communicate are created during startup.
For example, if you are setting up a 2 node cluster with the following DNs for each node:
cn=clockspring-1,ou=people,dc=example,dc=com
cn=clockspring-2,ou=people,dc=example,dc=com
<authorizers>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Initial User Identity 1">johnsmith@clockspring.net</property>
<property name="Initial User Identity 2">cn=clockspring-1,ou=people,dc=example,dc=com</property>
<property name="Initial User Identity 3">cn=clockspring-2,ou=people,dc=example,dc=com</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">file-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">johnsmith@NIFI.APACHE.ORG</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1">cn=clockspring-1,ou=people,dc=example,dc=com</property>
<property name="Node Identity 2">cn=clockspring-2,ou=people,dc=example,dc=com</property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
| In a cluster, all nodes must have the same authorizations.xml and users.xml. The only exception is if a node has empty authorizations.xml and users.xml files prior to joining the cluster. In this scenario, the node inherits them from the cluster during startup. |
Now that initial authorizations have been created, additional users, groups and authorizations can be created and managed in the UI.
Configuring Users & Access Policies
Depending on the capabilities of the configured UserGroupProvider and AccessPolicyProvider, the users, groups, and policies will be configurable in the UI. If the extensions are not configurable, the users, groups, and policies will be read-only in the UI. If the configured authorizer does not use UserGroupProvider and AccessPolicyProvider, the users and policies may or may not be visible and configurable in the UI, depending on the underlying implementation.
This section assumes the users, groups, and policies are configurable in the UI and describes:
-
How to create users and groups
-
How access policies are used to define authorizations
-
How to view policies that are set on a user
-
How to configure access policies by walking through specific examples
| Instructions requiring interaction with the UI assume the application is being accessed by User1, a user with administrator privileges, such as the 'Initial Admin Identity' user or a converted legacy admin user (see Authorizers.xml Setup). |
Creating Users and Groups
From the UI, select 'Users' from the Global Menu. This opens a dialog to create and manage users and groups.

Click the Add icon (
). To create a user, enter the 'Identity' information relevant to the authentication method chosen to secure
your Clockspring instance. Click OK.

To create a group, select the 'Group' radio button, enter the name of the group and select the users to be included in the group. Click OK.

Access Policies
You can manage the ability for users and groups to view or modify resources using 'access policies'. There are two types of access policies that can be applied to a resource:
-
View — If a view policy is created for a resource, only the users or groups that are added to that policy are able to see the details of that resource.
-
Modify — If a resource has a modify policy, only the users or groups that are added to that policy can change the configuration of that resource.
You can create and apply access policies on both global and component levels.
Global Access Policies
Global access policies govern the following system level authorizations:
| Policy | Privilege | Global Menu Selection | Resource Descriptor |
|---|---|---|---|
| view the UI | Allows users to view the UI | N/A | |
| access the controller | Allows users to view/modify the controller including Management Controller Services, Reporting Tasks, Registry Clients, Parameter Providers and nodes in the cluster | Controller Settings | |
| access parameter contexts | Allows users to view/modify Parameter Contexts. Access to Parameter Contexts is inherited from the "access the controller" policies unless overridden. | Parameter Contexts | |
| query provenance | Allows users to submit a Provenance Search and request Event Lineage | Data Provenance | |
| access restricted components | Allows users to create/modify restricted components assuming other permissions are sufficient. The restricted components may indicate which specific permissions are required. Permissions can be granted for specific restrictions or be granted regardless of restrictions. If permission is granted regardless of restrictions, the user can create/modify all restricted components. | N/A | |
| access all policies | Allows users to view/modify the policies for all components | Policies | |
| access users/user groups | Allows users to view/modify the users and user groups | Users | |
| retrieve site-to-site details | Allows other instances to retrieve Site-To-Site details | N/A | |
| view system diagnostics | Allows users to view System Diagnostics | Summary | |
| proxy user requests | Allows proxy machines to send requests on the behalf of others | N/A | |
| access counters | Allows users to view/modify Counters | Counters | |
Component Level Access Policies
Component level access policies govern the following component level authorizations:
| Policy | Privilege | Resource Descriptor & Action |
|---|---|---|
| view the component | Allows users to view component configuration details | |
| modify the component | Allows users to modify component configuration details | |
| operate the component | Allows users to operate components by changing component run status (start/stop/enable/disable), remote port transmission status, or terminating processor threads | |
| view provenance | Allows users to view provenance events generated by this component | |
| view the data | Allows users to view metadata and content for this component in FlowFile queues in outbound connections and through provenance events | |
| modify the data | Allows users to empty FlowFile queues in outbound connections and submit replays through provenance events | |
| view the policies | Allows users to view the list of users who can view/modify a component | |
| modify the policies | Allows users to modify the list of users who can view/modify a component | |
| receive data via site-to-site | Allows a port to receive data from other instances | |
| send data via site-to-site | Allows a port to send data to other instances | |
| You can apply access policies to all component types except connections. Connection authorizations are inferred by the individual access policies on the source and destination components of the connection, as well as the access policy of the process group containing the components. This is discussed in more detail in the Creating a Connection and Editing a Connection examples below. |
| In order to access List Queue or Delete Queue for a connection, a user requires permission to the "view the data" and "modify the data" policies on the component. In a clustered environment, all nodes must be added to these policies as well, as a user request could be replicated through any node in the cluster. |
Access Policy Inheritance
An administrator does not need to manually create policies for every component in the dataflow. To reduce the amount of time admins spend on authorization management, policies are inherited from parent resource to child resource. For example, if a user is given access to view and modify a process group, that user can also view and modify the components in the process group. Policy inheritance enables an administrator to assign policies at one time and have the policies apply throughout the entire dataflow.
You can override an inherited policy (as described in the Moving a Processor example below). Overriding a policy removes the inherited policy, breaking the chain of inheritance from parent to child, and creates a replacement policy to add users as desired. Inherited policies and their users can be restored by deleting the replacement policy.
| 'View the policies' and 'Modify the policies' component-level access policies are an exception to this inherited behavior. When a user is added to either policy, they are added to the current list of administrators. They do not override higher-level administrators. For this reason, only component-specific administrators are displayed for the 'View the policies' and 'Modify the policies' access policies. |
| You cannot modify the users/groups on an inherited policy. Users and groups can only be added or removed from a parent policy or an override policy. |
Viewing Policies on Users
From the UI, select 'Users' from the Global Menu. This opens the Users dialog.

Select the View User Policies icon (
).

The User Policies window displays the global and component level policies that have been set for the chosen user. Select the Go To icon (
) to navigate to that component in the canvas.
Access Policy Configuration Examples
The most effective way to understand how to create and apply access policies is to walk through some common examples. The following scenarios assume User1 is an administrator and User2 is a newly added user that has only been given access to the UI.
Let’s begin with two processors on the canvas as our starting point: GenerateFlowFile and LogAttribute.

User1 can add components to the dataflow and is able to move, edit and connect all processors. The details and properties of the root process group and processors are visible to User1.

User1 wants to maintain their current privileges to the dataflow and its components.
User2 is unable to add components to the dataflow or move, edit, or connect components. The details and properties of the root process group and processors are hidden from User2.

Moving a Processor
To allow User2 to move the GenerateFlowFile processor in the dataflow and only that processor, User1 performs the following steps:
-
Select the GenerateFlowFile processor so that it is highlighted.
-
Select the Access Policies icon (
) from the Operate palette and the Access Policies dialog opens. -
Select 'modify the component' from the policy drop-down. The 'modify the component' policy that currently exists on the processor (child) is the 'modify the component' policy inherited from the root process group (parent) on which User1 has privileges.
-
Select the Override link in the policy inheritance message. When creating the replacement policy, you are given a choice to override with a copy of the inherited policy or an empty policy. Select the Override button to create a copy.
-
On the replacement policy that is created, select the Add User icon (
). Find or enter User2 in the User Identity field and select OK. With these changes, User1 maintains the ability to move both processors on the canvas. User2 can now move the GenerateFlowFile processor but cannot move the LogAttribute processor.
Editing a Processor
In the Moving a Processor example above, User2 was added to the 'modify the component' policy for GenerateFlowFile. Without the ability to view the processor properties, User2 is unable to modify the processor’s configuration. In order to edit a component, a user must be on both the 'view the component' and 'modify the component' policies. To implement this, User1 performs the following steps:
-
Select the GenerateFlowFile processor.
-
Select the Access Policies icon (
) from the Operate palette and the Access Policies dialog opens. -
Select 'view the component' from the policy drop-down. The 'view the component' policy that currently exists on the processor (child) is the 'view the component' policy inherited from the root process group (parent) on which User1 has privileges.
-
Select the Override link in the policy inheritance message, keep the default of Copy policy and select the Override button.
-
On the override policy that is created, select the Add User icon (
). Find or enter User2 in the User Identity field and select OK. With these changes, User1 maintains the ability to view and edit the processors on the canvas. User2 can now view and edit the GenerateFlowFile processor.
Creating a Connection
With the access policies configured as discussed in the previous two examples, User1 is able to connect GenerateFlowFile to LogAttribute:

User2 cannot make the connection:

This is because:
-
User2 does not have modify access on the process group.
-
Even though User2 has view and modify access to the source component (GenerateFlowFile), User2 does not have an access policy on the destination component (LogAttribute).
To allow User2 to connect GenerateFlowFile to LogAttribute, as User1:
-
Select the root process group. The Operate palette is updated with details for the root process group.
-
Select the Access Policies icon (
) from the Operate palette and the Access Policies dialog opens. -
Select 'modify the component' from the policy drop-down.

-
Select the Add User icon (
). Find or enter User2 and select OK.

By adding User2 to the 'modify the component' policy on the process group, User2 is added to the 'modify the component' policy on the LogAttribute processor by policy inheritance. To confirm this, highlight the LogAttribute processor and select the Access Policies icon (
) from the Operate palette:

With these changes, User2 can now connect the GenerateFlowFile processor to the LogAttribute processor.


Editing a Connection
Assume User1 or User2 adds a ReplaceText processor to the root process group:

User1 can select and change the existing connection (between GenerateFlowFile and LogAttribute) to now connect GenerateFlowFile to ReplaceText:

User2 is unable to perform this action.

To allow User2 to connect GenerateFlowFile to ReplaceText, as User1:
-
Select the root process group. The Operate palette is updated with details for the root process group.
-
Select the Access Policies icon (
). -
Select 'view the component' from the policy drop-down.

-
Select the Add User icon (
). Find or enter User2 and select OK.

Having been added to both the view and modify policies for the process group, User2 can now connect the GenerateFlowFile processor to the ReplaceText processor.

Encryption Configuration
The EncryptContent processor allows for the encryption and decryption of data, both within Clockspring and when integrating with external systems, such as OpenSSL and other data sources and consumers.
Key Derivation Functions
Key Derivation Functions (KDF) are mechanisms by which human-readable information, usually a password or other secret information, is translated into a cryptographic key suitable for data protection. For further information, read the Wikipedia entry on Key Derivation Functions.
OpenSSL PKCS#5 v1.5 EVP_BytesToKey
-
This KDF was added in v0.4.0.
-
This KDF is provided for compatibility with data encrypted using OpenSSL’s default PBE, known as
EVP_BytesToKey. This is a single iteration of MD5 over the concatenation of the password and 8 bytes of random ASCII salt. OpenSSL recommends using PBKDF2 for key derivation but does not expose the library method necessary to the command-line tool, so this KDF is still the de facto default for command-line encryption.
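For reference, data produced by OpenSSL's default password-based encryption (the scheme this KDF interoperates with) is typically created with a command along the following lines. This is a minimal sketch: the file names and password are placeholders, and on OpenSSL 1.1.0 and later the -md md5 option must be given explicitly because the command-line default digest changed to SHA-256.
# file names and password below are placeholders
openssl enc -aes-128-cbc -md md5 -pass pass:thisIsABadPassword -in plaintext.txt -out ciphertext.enc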
Bcrypt
-
This KDF was added in v0.5.0.
-
Bcrypt is an adaptive function based on the Blowfish cipher. This KDF is recommended as it automatically incorporates a random 16 byte salt, configurable cost parameter (or "work factor"), and is hardened against brute-force attacks using GPGPU (which share memory between cores) by requiring access to "large" blocks of memory during the key derivation. It is less resistant to FPGA brute-force attacks where the gate arrays have access to individual embedded RAM blocks.
-
Because the length of a Bcrypt-derived hash is always 184 bits, the hash output (not including the algorithm, work factor, or salt) is then fed to a
SHA-512 digest and truncated to the desired key length. This provides the benefit of the avalanche effect over the input. Prior to this, the complete output (algorithm, work factor, salt, and hash output for a total of 480 bits) was provided to the SHA-512 digest function. Clockspring can transparently handle decrypting data (under 10 MiB) encrypted using a key derived via this legacy process.
The recommended minimum work factor is 12 (2^12 key derivation rounds) (as of 2/1/2016 on commodity hardware) and should be increased to the threshold at which legitimate systems will encounter detrimental delays (see schedule below or use BcryptCipherProviderGroovyTest#testDefaultConstructorShouldProvideStrongWorkFactor() to calculate safe minimums).
The salt format is
$2a$10$ABCDEFGHIJKLMNOPQRSTUV. The salt is delimited by $ and the three sections are as follows:
2a- the version of the format. An extensive explanation can be found here. Clockspring currently uses 2a for all salts generated internally.
10- the work factor. This is actually the log2 value, so the total iteration count would be 2^10 (1024) in this case.
ABCDEFGHIJKLMNOPQRSTUV- the 22 character, Radix64-encoded, unpadded, raw salt value. This decodes to a 16 byte salt used in the key derivation. The Bcrypt Radix64 encoding is not compatible with standard MIME Base64 encoding.
-
Scrypt
-
This KDF was added in v0.5.0.
-
Scrypt is an adaptive function designed in response to
bcrypt. This KDF is recommended as it requires relatively large amounts of memory for each derivation, making it resistant to hardware brute-force attacks. -
The recommended minimum cost is
N=2^14 (16,384), r=8, p=1 (as of 2/1/2016 on commodity hardware). p must be a positive integer and less than (2^32 − 1) * (Hlen/MFlen) where Hlen is the length in octets of the digest function output (32 for SHA-256) and MFlen is the length in octets of the mixing function output, defined as r * 128. These parameters should be increased to the threshold at which legitimate systems will encounter detrimental delays (see schedule below or use ScryptCipherProviderGroovyTest#testDefaultConstructorShouldProvideStrongParameters() to calculate safe minimums).
The salt format is
$s0$e0101$ABCDEFGHIJKLMNOPQRSTUV. The salt is delimited by $ and the three sections are as follows:
s0- the version of the format. Clockspring currently uses s0 for all salts generated internally.
e0101- the cost parameters. This is actually a hexadecimal encoding of N, r, p using shifts. This can be formed/parsed using Scrypt#encodeParams() and Scrypt#parseParameters().
Some external libraries encode
N, r, and p separately in the form $4000$1$1$ (N is stored in hex encoding as 0x4000, which is 0d16384, or 2^14 as 0xe=0d14). A utility method is available at ScryptCipherProvider#translateSalt() which will convert the external form to the internal form.
-
-
ABCDEFGHIJKLMNOPQRSTUV- the 12-44 character, Base64-encoded, unpadded, raw salt value. This decodes to an 8-32 byte salt used in the key derivation.
-
PBKDF2
-
This KDF was added in v0.5.0.
-
Password-Based Key Derivation Function 2 is an adaptive derivation function which uses an internal pseudorandom function (PRF) and iterates it many times over a password and salt (at least 16 bytes).
-
The PRF is recommended to be
HMAC/SHA-256 or HMAC/SHA-512. The use of an HMAC cryptographic hash function mitigates a length extension attack.
The recommended minimum number of iterations is 160,000 (as of 2/1/2016 on commodity hardware). This number should be doubled every two years (see schedule below or use
PBKDF2CipherProviderGroovyTest#testDefaultConstructorShouldProvideStrongIterationCount() to calculate safe minimums).
This KDF is not memory-hard (can be parallelized massively with commodity hardware) but is still recommended as sufficient by NIST SP 800-132 (PDF) and many cryptographers (when used with a proper iteration count and HMAC cryptographic hash function).
None
-
This KDF was added in v0.5.0.
-
This KDF performs no operation on the input and is a marker to indicate the raw key is provided to the cipher. The key must be provided in hexadecimal encoding and be of a valid length for the associated cipher/algorithm.
Argon2
-
This KDF was added in v1.12.0.
-
Argon2 is a key derivation function which won the Password Hashing Competition in 2015. This KDF is recommended as it offers a variety of modes which can be tailored to prevention of GPU attacks, prevention of side-channel attacks, or a combination of both. It allows for a variable output key length.
-
The recommended minimum cost is
memory=2^16 (65,536) KiB, iterations=5, parallelism=8 (as of 4/22/2020 on commodity hardware). The Argon2 specification paper (PDF) Section 9 describes an algorithm used to determine recommended parameters. These parameters should be increased to the threshold at which legitimate systems will encounter detrimental delays (use Argon2SecureHasherTest#testDefaultCostParamsShouldBeSufficient() to calculate safe minimums).
The salt format is
$argon2id$v=19$m=65536,t=5,p=8$ABCDEFGHIJKLMNOPQRSTUV. The salt is delimited by $ and the four sections are as follows:
argon2id- the "type" of algorithm (2i, 2d, 2id). Clockspring currently uses argon2id for all salts generated internally.
v=19- the version of the algorithm in decimal (0d19=0x13). Clockspring currently uses 0d19 for all salts generated internally.
m=65536,t=5,p=8- the cost parameters. This contains the memory, iterations, and parallelism in order. -
ABCDEFGHIJKLMNOPQRSTUV- the 12-44 character, Base64-encoded, unpadded, raw salt value. This decodes to an 8-32 byte salt used in the key derivation.
-
Additional Resources
Encrypted Passwords in Flows
Clockspring always stores all sensitive values (passwords, tokens, and other credentials) populated into a flow in an encrypted format on disk.
The encryption algorithm used is specified by nifi.sensitive.props.algorithm and the password from which the encryption key is derived is specified by nifi.sensitive.props.key in clockspring.properties (see Security Configuration for additional information).
Clockspring supports several configuration options to provide authenticated encryption with associated data (AEAD) using AES Galois/Counter Mode (AES-GCM). These algorithms use a strong Key Derivation Function to derive a secret key of specified length based on the sensitive properties key configured. Each Key Derivation Function uses a static salt in order to support flow configuration comparison across cluster nodes. Each Key Derivation Function also uses default iteration and cost parameters as defined in the associated secure hashing implementation class.
Property Encryption Algorithms
The following strong encryption methods can be configured in the nifi.sensitive.props.algorithm property:
-
NIFI_ARGON2_AES_GCM_256 -
NIFI_PBKDF2_AES_GCM_256
Each Key Derivation Function uses the following default parameters:
-
Argon2
-
Iterations: 5
-
Memory: 65536 KB
-
Parallelism: 8
-
-
PBKDF2
-
Iterations: 160,000
-
Pseudorandom Function Family: SHA-512
-
All options require a password (nifi.sensitive.props.key value) of at least 12 characters.
Clockspring generates a random value when nifi.sensitive.props.key is
empty. Clockspring writes the generated value to clockspring.properties and logs a warning.
Clustered installations of Clockspring require the same value to be configured on all nodes.
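For example, a clockspring.properties fragment selecting the Argon2-based algorithm could look like the following sketch; the key shown is a placeholder and must be replaced with your own value of at least 12 characters:
# illustrative values only
nifi.sensitive.props.key=replaceWithAStrongPassphrase
nifi.sensitive.props.algorithm=NIFI_ARGON2_AES_GCM_256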
HashiCorp Vault providers
Two encryption providers are currently configurable in the bootstrap-hashicorp-vault.conf file:
| Provider | Provider Identifier | Description |
|---|---|---|
HashiCorp Vault Transit provider |
|
Uses HashiCorp Vault’s Transit Secrets Engine to decrypt sensitive properties. |
HashiCorp Vault Key/Value provider |
|
Retrieves sensitive values from Secrets stored in a HashiCorp Vault Key/Value (unversioned) Secrets Engine. |
Note that all HashiCorp Vault encryption providers require a running Vault instance in order to decrypt these values at Clockspring’s startup.
Following are the configuration properties available inside the bootstrap-hashicorp-vault.conf file:
Required properties
| Property Name | Description | Default |
|---|---|---|
|
The HashiCorp Vault URI (e.g., |
none |
|
Filename of a properties file containing Vault authentication properties. See the |
none |
|
If set, enables the HashiCorp Vault Transit provider. The value should be the Vault |
none |
|
If set, enables the HashiCorp Vault Key/Value provider. The value should be the Vault |
none |
Optional properties
| Property Name | Description | Default |
|---|---|---|
|
The Key/Value Secrets Engine version: |
|
|
The connection timeout of the Vault client |
|
|
The read timeout of the Vault client |
|
|
A comma-separated list of the enabled TLS cipher suites |
none |
|
A comma-separated list of the enabled TLS protocols |
none |
|
Path to a keystore. Required if the Vault server is TLS-enabled |
none |
|
Keystore type (JKS, BCFKS or PKCS12). Required if the Vault server is TLS-enabled |
none |
|
Keystore password. Required if the Vault server is TLS-enabled |
none |
|
Path to a truststore. Required if the Vault server is TLS-enabled |
none |
|
Truststore type (JKS, BCFKS or PKCS12). Required if the Vault server is TLS-enabled |
none |
|
Truststore password. Required if the Vault server is TLS-enabled |
none |
AWS KMS provider
This provider uses AWS Key Management Service for decryption. AWS KMS configuration properties can be stored in the bootstrap-aws.conf file, as referenced in bootstrap.conf. If the configuration properties are not specified in bootstrap-aws.conf, then the provider will attempt to use the AWS default credentials provider, which checks standard environment variables and system properties.
Required properties
| Property Name | Description | Default |
|---|---|---|
|
The identifier or ARN that the AWS KMS client uses for encryption and decryption. |
none |
Optional properties
All of the following must be configured, or they will be ignored entirely.
| Property Name | Description | Default |
|---|---|---|
|
The AWS region used to configure the AWS KMS Client. |
none |
|
The access key ID credential used to access AWS KMS. |
none |
|
The secret access key used to access AWS KMS. |
none |
AWS Secrets Manager provider
This provider uses AWS Secrets Manager Service to store and retrieve AWS Secrets. AWS Secrets Manager configuration properties can be stored in the bootstrap-aws.conf file, as referenced in bootstrap.conf. If the configuration properties are not specified in bootstrap-aws.conf, then the provider will attempt to use the AWS default credentials provider, which checks standard environment variables and system properties.
Optional properties
All of the following must be configured, or they will be ignored entirely.
| Property Name | Description | Default |
|---|---|---|
|
The AWS region used to configure the AWS Secrets Manager Client. |
none |
|
The access key ID credential used to access AWS Secrets Manager. |
none |
|
The secret access key used to access AWS Secrets Manager. |
none |
Azure Key Vault Key Provider
This protection scheme uses keys managed by Azure Key Vault Keys for encryption and decryption.
Azure Key Vault configuration properties can be stored in the bootstrap-azure.conf file, as referenced in the
bootstrap.conf of Clockspring or Registry.
The provider will use the
DefaultAzureCredential
for authentication.
The Azure Identity client library
describes the process for credentials resolution, which leverages environment variables, system properties, and falls
back to
Managed Identity
authentication.
Required properties
| Property Name | Description | Default |
|---|---|---|
|
The identifier of the key that the Azure Key Vault client uses for encryption and decryption. |
none |
|
The encryption algorithm that the Azure Key Vault client uses for encryption and decryption. |
none |
Azure Key Vault Secret Provider
This protection scheme uses secrets managed by Azure Key Vault Secrets for storing and retrieving protected properties.
Azure Key Vault configuration properties can be stored in the bootstrap-azure.conf file, as referenced in the
bootstrap.conf of Clockspring or Registry.
The provider will use the
DefaultAzureCredential
for authentication.
The Azure Identity client library
describes the process for credentials resolution, which leverages environment variables, system properties, and falls
back to
Managed Identity
authentication.
Names of secrets stored in Azure Key Vault support alphanumeric and dash characters, but do not support characters such as the slash (/) or the period (.).
For this reason, Clockspring replaces these characters with - when storing and retrieving secrets. The following table provides an example property name mapping:
| Property Context | Property Name | Secret Name |
|---|---|---|
|
|
|
Required properties
| Property Name | Description | Default |
|---|---|---|
|
URI for the Azure Key Vault service such as |
none |
Google Cloud KMS provider
This protection scheme uses Google Cloud Key Management Service (Cloud KMS) for encryption and decryption. Google Cloud KMS configuration properties are to be stored in the bootstrap-gcp.conf file, as referenced in the bootstrap.conf of Clockspring or Registry. Credentials must be configured as described in the Google Cloud KMS documentation.
Required properties
| Property Name | Description | Default |
|---|---|---|
|
The project containing the key that the Google Cloud KMS client uses for encryption and decryption. |
none |
|
The geographic region of the project containing the key that the Google Cloud KMS client uses for encryption and decryption. |
none |
|
The keyring containing the key that the Google Cloud KMS client uses for encryption and decryption. |
none |
|
The key identifier that the Google Cloud KMS client uses for encryption and decryption. |
none |
Property Context Mapping
Some encryption providers store protected values in an external service instead of persisting the encrypted values directly in the configuration file. To support this use case, a property context is defined for each protected property in Clockspring’s configuration files, in the format: {context-name}/{property-name}
-
context-name- represents a namespace for properties in order to disambiguate properties with the same name. Without additional configuration, all protected properties are assigned thedefaultcontext. -
property-name- contains the name of the property.
In order to support logical context names, mapping properties may be provided in bootstrap.conf, as follows:
nifi.bootstrap.protection.context.mapping.<context-name>=<identifier matching regex>
Here, <context-name> determines the context name above, and <identifier matching regex> maps any property whose group identifier matches the provided regular expression. Group identifiers are defined per configuration file type and are described as follows:
| Configuration File | Group Identifier Description | Assigned Context |
|---|---|---|
|
There is no concept of a group identifier here, since all property names should be unique. |
default |
|
The |
The mapped context name if RegEx matches the identifier, otherwise default |
|
The |
The mapped context name if RegEx matches the identifier, otherwise default |
Example
In the Clockspring binary distribution, the login-identity-providers.xml file comes with a provider with the identifier ldap-provider and a property called Manager Password:
<provider>
<identifier>ldap-provider</identifier>
<class>org.apache.nifi.ldap.LdapProvider</class>
...
<property name="Manager Password"/>
...
</provider>
Similarly, the authorizers.xml file comes with a ldap-user-group-provider and a property also called Manager Password:
<userGroupProvider>
<identifier>ldap-user-group-provider</identifier>
<class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
...
<property name="Manager Password"/>
...
</userGroupProvider>
If both Manager Password properties should reference the exact same protected value (e.g., the same Secret in the HashiCorp Vault K/V provider) while still being distinguished from any other Manager Password property unrelated to LDAP, the following mapping could be added:
nifi.bootstrap.protection.context.mapping.ldap=ldap-.*
This would cause both of the above to be assigned a context of "ldap/Manager Password" instead of "default/Manager Password".
Toolkit Administrative Tools
The Toolkit also contains command-line utilities for administrators to support Clockspring maintenance in standalone and clustered environments.
-
CLI — The
clitool enables administrators to interact with Clockspring and Registry instances to automate tasks such as deploying versioned flows and managing process groups and cluster nodes.
For more information about each utility, see the Toolkit Guide.
Clustering Configuration
This section provides a quick overview of Clustering and instructions on how to set up a basic cluster. See cluster-setup-guide.html for a more concise cluster configuration document.
Zero-Leader Clustering
Clockspring employs a Zero-Leader Clustering paradigm. Each node in the cluster has an identical flow and performs the same tasks on the data, but each operates on a different set of data. The cluster automatically distributes the data throughout all the active nodes.
One of the nodes is automatically elected (via Apache ZooKeeper) as the Cluster Coordinator. All nodes in the cluster will then send heartbeat/status information to this node, and this node is responsible for disconnecting nodes that do not report any heartbeat status for some amount of time. Additionally, when a new node elects to join the cluster, the new node must first connect to the currently-elected Cluster Coordinator in order to obtain the most up-to-date flow. If the Cluster Coordinator determines that the node is allowed to join (based on its configured Firewall file), the current flow is provided to that node, and that node is able to join the cluster, assuming that the node’s copy of the flow matches the copy provided by the Cluster Coordinator. If the node’s version of the flow configuration differs from that of the Cluster Coordinator’s, the node will not join the cluster.
Why Cluster?
Clockspring Administrators or DataFlow Managers (DFMs) may find that using one instance of Clockspring on a single server is not enough to process the amount of data they have. So, one solution is to run the same dataflow on multiple Clockspring servers. However, this creates a management problem, because each time DFMs want to change or update the dataflow, they must make those changes on each server and then monitor each server individually. By clustering the Clockspring servers, it’s possible to have that increased processing capability along with a single interface through which to make dataflow changes and monitor the dataflow. Clustering allows the DFM to make each change only once, and that change is then replicated to all the nodes of the cluster. Through the single interface, the DFM may also monitor the health and status of all the nodes.
Terminology
Clockspring Clustering is unique and has its own terminology. It’s important to understand the following terms before setting up a cluster:
Cluster Coordinator: A Clockspring Cluster Coordinator is the node in a Clockspring cluster that is responsible for carrying out tasks to manage which nodes are allowed in the cluster and providing the most up-to-date flow to newly joining nodes. When a DataFlow Manager manages a dataflow in a cluster, they are able to do so through the User Interface of any node in the cluster. Any change made is then replicated to all nodes in the cluster.
Nodes: Each cluster is made up of one or more nodes. The nodes do the actual data processing.
Primary Node: Every cluster has one Primary Node. On this node, it is possible to run "Isolated Processors" (see below). ZooKeeper is used to automatically elect a Primary Node. If that node disconnects from the cluster for any reason, a new Primary Node will automatically be elected. Users can determine which node is currently elected as the Primary Node by looking at the Cluster Management page of the User Interface.
Isolated Processors: In a cluster, the same dataflow runs on all the nodes. As a result, every component in the flow runs on every node. However, there may be cases when the DFM would not want every processor to run on every node. The most common case is when using a processor that communicates with an external service using a protocol that does not scale well. For example, the FetchSFTP processor pulls from a remote directory. If the FetchSFTP Processor runs on every node in the cluster and tries simultaneously to pull from the same remote directory, there could be race conditions. Therefore, the DFM could configure the FetchSFTP on the Primary Node to run in isolation, meaning that it only runs on that node. With the proper dataflow configuration, it could pull in data and load-balance it across the rest of the nodes in the cluster.
Heartbeats: The nodes communicate their health and status to the currently elected Cluster Coordinator via "heartbeats", which let the Coordinator know they are still connected to the cluster and working properly. By default, the nodes emit heartbeats every 5 seconds, and if the Cluster Coordinator does not receive a heartbeat from a node within 40 seconds (= 5 seconds * 8), it disconnects the node due to "lack of heartbeat". The 5-second and 8 times settings are configurable in the clockspring.properties file (see the Cluster Common Properties section for more information). The reason that the Cluster Coordinator disconnects the node is because the Coordinator needs to ensure that every node in the cluster is in sync, and if a node is not heard from regularly, the Coordinator cannot be sure it is still in sync with the rest of the cluster. If, after 40 seconds, the node does send a new heartbeat, the Coordinator will automatically request that the node re-join the cluster, to include the re-validation of the node’s flow. Both the disconnection due to lack of heartbeat and the reconnection once a heartbeat is received are reported to the DFM in the User Interface.
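As an illustrative sketch, the defaults described above would correspond to entries such as the following in clockspring.properties; the exact property names are assumptions here and should be confirmed against the Cluster Common Properties section:
# assumed property names - verify against the Cluster Common Properties section
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.heartbeat.missable.max=8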
Communication within the Cluster
As noted, the nodes communicate with the Cluster Coordinator via heartbeats. When a Cluster Coordinator is elected, it updates a well-known ZNode in Apache ZooKeeper with its connection information so that nodes understand where to send heartbeats. If one of the nodes goes down uncleanly, the other nodes in the cluster will not automatically pick up the load of the missing node. It is possible for the DFM to configure the dataflow for failover contingencies; however, this is dependent on the dataflow design and does not happen automatically.
When the DFM makes changes to the dataflow, the node that receives the request to change the flow communicates those changes to all nodes and waits for each node to respond, indicating that it has made the change on its local flow.
Managing Nodes
Disconnect Nodes
A DFM may manually disconnect a node from the cluster. A node may also become disconnected for other reasons, such as due to a lack of heartbeat. The Cluster Coordinator will show a bulletin on the User Interface when a node is disconnected. The DFM will not be able to make any changes to the dataflow until the issue of the disconnected node is resolved. The DFM or the Administrator will need to troubleshoot the issue with the node and resolve it before any new changes can be made to the dataflow. However, it is worth noting that just because a node is disconnected does not mean that it is not working. This may happen for a few reasons, for example when the node is unable to communicate with the Cluster Coordinator due to network problems.
To manually disconnect a node, select the "Disconnect" icon (
) from the node’s row.
A disconnected node can be connected (
), offloaded (
) or deleted (
).
| Not all nodes in a "Disconnected" state can be offloaded. If the node is disconnected and unreachable, the offload request can not be received by the node to start the offloading. Additionally, offloading may be interrupted or prevented due to firewall rules. |
Offload Nodes
Flowfiles that remain on a disconnected node can be rebalanced to other active nodes in the cluster via offloading. In the Cluster Management dialog, select the "Offload" icon (
) for a Disconnected node. This will stop all processors, terminate all processors, stop transmitting on all remote process groups and rebalance FlowFiles to the other connected nodes in the cluster.
Nodes that remain in "Offloading" state due to errors encountered (out of memory, no network connection, etc.) can be reconnected to the cluster by restarting Clockspring on the node. Offloaded nodes can be either reconnected to the cluster (by selecting Connect or restarting Clockspring on the node) or deleted from the cluster.
| Clockspring automatically offloads nodes when the service is stopped using 'systemctl stop clockspring' or running '/opt/clockspring/bin/clockspring.sh stop' when these nodes are configured to be part of a cluster. |
Delete Nodes
There are cases where a DFM may wish to continue making changes to the flow, even though a node is not connected to the cluster. In this case, the DFM may elect to delete the node from the cluster entirely. In the Cluster Management dialog, select the "Delete" icon (
) for a Disconnected or Offloaded node. Once deleted, the node cannot be rejoined to the cluster until it has been restarted.
Flow Election
When a cluster first starts up, Clockspring must determine which of the nodes have the
"correct" version of the flow. This is done by voting on the flows that each of the nodes has. When a node
attempts to connect to a cluster, it provides a copy of its local flow and (if the policy provider allows for configuration via Clockspring)
its users, groups, and policies, to the Cluster Coordinator. If no flow
has yet been elected the "correct" flow, the node’s flow is compared to each of the other Nodes' flows. If another
Node’s flow matches this one, a vote is cast for this flow. If no other Node has reported the same flow yet, this
flow will be added to the pool of possibly elected flows with one vote. After
some amount of time has elapsed (configured by setting the nifi.cluster.flow.election.max.wait.time property) or
some number of Nodes have cast votes (configured by setting the nifi.cluster.flow.election.max.candidates property),
a flow is elected to be the "correct" copy of the flow.
Any node whose dataflow, users, groups, and policies conflict with those elected will backup any conflicting resources and replace the local
resources with those from the cluster. How the backup is performed depends on the configured Access Policy Provider and User Group Provider.
For file-based access policy providers, the backup will be written to the same directory as the existing file (e.g., $CLOCKSPRING_HOME/conf) and bear the same
name but with a suffix of "." and a timestamp. For example, if the flow itself conflicts with the cluster’s flow at 12:05:03 on January 1, 2020,
the node’s flow.json.gz file will be copied to flow.json.gz.2020-01-01-12-05-03 and the cluster’s flow will then be written to flow.json.gz.
Similarly, this will happen for the users.xml and authorizations.xml file. This is done so that the flow can be manually reverted if necessary
by renaming the backup file back to flow.json.gz, for example.
It is important to note that before inheriting the elected flow, Clockspring will first read through the FlowFile repository and any swap files to determine which queues in the dataflow currently hold data. If there exists any queue in the dataflow that contains a FlowFile, that queue must also exist in the elected dataflow. If that queue does not exist in the elected dataflow, the node will not inherit the dataflow, users, groups, and policies. Instead, Clockspring will log errors to that effect and will fail to startup. This ensures that even if the node has data stored in a connection, and the cluster’s dataflow is different, restarting the node will not result in data loss.
Election is performed according to the "popular vote" with the caveat that the winner will never be an "empty flow" unless all flows are empty. This allows an administrator to remove a node's flow.json.gz file and restart the node, knowing that the node's flow will not be voted to be the "correct" flow unless no other flow is found. If there are two non-empty flows that receive the same number of votes, one of those flows will be chosen. The methodology used to determine which of those flows is chosen is undefined and may change at any time without notice.
Basic Cluster Setup
This section describes the setup for a simple three-node secure cluster comprised of three instances of Clockspring.
For each instance, certain properties in the clockspring.properties file will need to be updated. In particular, the Web and Clustering properties should be evaluated for your situation and adjusted accordingly. All the properties are described in the System Properties section of this guide; however, in this section, we will focus on the minimum properties that must be set for a simple cluster.
For all three instances, the Cluster Common Properties can be left with the default settings. Note, however, that if you change these settings, they must be set the same on every instance in the cluster.
For each Node, the minimum properties to configure are as follows (a consolidated example appears after this list):
-
Under the Web Properties section, set the HTTPS port that you want the Node to run on.
-
Under the State Management section, set the
nifi.state.management.provider.cluster property to the identifier of the Cluster State Provider. Ensure that the Cluster State Provider has been configured in the state-management.xml file. See Configuring State Providers for more information.
Under Cluster Node Properties, set the following:
-
nifi.cluster.is.node- Set this to true. -
nifi.cluster.node.address- Set this to the fully qualified hostname of the node. If left blank, it defaults to localhost.
nifi.cluster.node.protocol.port- Set this to an open port that is higher than 1024 (anything lower requires root). -
nifi.cluster.node.protocol.max.threads- The maximum number of threads that should be used to communicate with other nodes in the cluster. This property defaults to 50. A thread pool is used for replicating requests to all nodes. The thread pool will increase the number of active threads to the limit set by this property. It is typically recommended that this property be set to 4-8 times the number of nodes in your cluster. There could be up to n+2 threads for a given request, where n = number of nodes in your cluster. As an example, if 4 requests are made, a 5 node cluster will use 4 * 7 = 28 threads.
nifi.cluster.flow.election.max.wait.time- Specifies the amount of time to wait before electing a Flow as the "correct" Flow. If the number of Nodes that have voted is equal to the number specified by the nifi.cluster.flow.election.max.candidates property, the cluster will not wait this long. The default value is 5 mins. Note that the time starts as soon as the first vote is cast.
nifi.cluster.flow.election.max.candidates- Specifies the number of Nodes required in the cluster to cause early election of Flows. This allows the Nodes in the cluster to avoid having to wait a long time before starting processing if we reach at least this number of nodes in the cluster.
-
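Putting the items above together, a minimal clockspring.properties fragment for one node might look like the following sketch; the hostname, port, and state provider identifier are illustrative placeholders:
# illustrative values - adjust to your environment
nifi.state.management.provider.cluster=zk-provider
nifi.cluster.is.node=true
nifi.cluster.node.address=node1.example.com
nifi.cluster.node.protocol.port=11443
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=3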
ZooKeeper Clustering
The following application properties support clustering with Apache ZooKeeper (an example fragment appears after this list):
-
nifi.cluster.leader.election.implementation
The Leader Election Implementation must be set to CuratorLeaderElectionManager for clustering with Apache ZooKeeper.
The implementation defaults to ZooKeeper-based clustering when this property is not specified.
-
nifi.zookeeper.connect.string
The Connect String that is needed to connect to Apache ZooKeeper. This is a comma-separated list
of hostname:port pairs. For example, node1:2181,node2:2181,node3:2181. This should contain a list of all ZooKeeper
instances in the ZooKeeper quorum.
-
nifi.zookeeper.root.node
The root ZNode that should be used in ZooKeeper. ZooKeeper provides a directory-like structure
for storing data. Each 'directory' in this structure is referred to as a ZNode. This denotes the root ZNode, or 'directory',
that should be used for storing data. The default value is /root. This is important to set correctly, as which cluster
the Clockspring instance attempts to join is determined by which ZooKeeper instance it connects to and the ZooKeeper Root Node
that is specified.
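Taken together, the ZooKeeper-related entries in clockspring.properties might look like the following; the hostnames are illustrative:
# illustrative hostnames
nifi.cluster.leader.election.implementation=CuratorLeaderElectionManager
nifi.zookeeper.connect.string=node1:2181,node2:2181,node3:2181
nifi.zookeeper.root.node=/root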
Cluster Firewall Configuration
Clockspring clustering supports network access restrictions using a custom firewall configuration.
The nifi.cluster.firewall.file property can be configured with a path to a file containing hostnames, IP addresses, or
subnets of permitted nodes. The Cluster Coordinator uses the configuration to determine whether to accept or reject
heartbeats and connection requests from potential cluster members.
The configuration file format expects one entry per line and ignores lines beginning with the # character. Clockspring uses
standard Java host name resolution to convert names to IP addresses. Java host name resolution leverages a combination
of local machine configuration and network services, such as DNS. The configuration file supports IPv4 addresses or subnet
ranges using CIDR notation. The following example cluster firewall configuration includes a combination of supported entries:
# Cluster Node Hostnames
nifi0.example.com
nifi1.example.com
nifi3.example.com
# Cluster Node Addresses
192.168.0.1
192.168.0.2
192.168.0.3
# Cluster Subnet Address
# Address Range from 192.168.0.1 to 192.168.0.6
192.168.0.0/29
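A corresponding clockspring.properties entry pointing at such a file could look like the following; the file name and location are illustrative:
# illustrative path
nifi.cluster.firewall.file=./conf/cluster-firewall.txt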
Troubleshooting
If you encounter issues and your cluster does not work as described, investigate the application.log and user.log
files on the nodes. If needed, you can change the logging level to DEBUG by editing the conf/logback.xml file. Specifically,
set the level="DEBUG" in the following line (instead of "INFO"):
<logger name="org.apache.nifi.web.api.config" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
State Management
Clockspring provides a mechanism for Processors, Reporting Tasks, Controller Services, and the framework itself to persist state. This allows a Processor, for example, to resume from the place where it left off after Clockspring is restarted. Additionally, it allows for a Processor to store some piece of information so that the Processor can access that information from all of the different nodes in the cluster. This allows one node to pick up where another node left off, or to coordinate across all of the nodes in a cluster.
Configuring State Providers
When a component decides to store or retrieve state, it does so by providing a Scope, either Local to the node or
applicable to the entire Cluster. Component implementation code and configuration properties determine the requested
Scope, which the framework provides according to the State Management configuration. The clockspring.properties configuration
contains several properties for managing these State Providers.
| Property | Description |
|---|---|
|
The configuration file specifies the path to an external XML file that the framework uses to configure State Providers. This XML file may contain configurations for multiple providers. |
|
The Local Provider stores current Local State information. The property value identifies a Local Provider in the State Management configuration that the framework will use for storing and retrieving Local State for requesting components. |
|
The Cluster Provider stores current Cluster State information. The property value identifies a Cluster Provider in the State Management configuration that the framework will use for storing and retrieving Cluster State for requesting components. |
|
The Previous Cluster State Provider enables population of the current Cluster State from an existing Provider. The property value identifies a Cluster Provider in the State Management configuration that the framework will use as the initial source of Cluster State when the current Cluster State Provider has no information stored. The framework enumerates the Current Cluster Provider when a node becomes Primary, and proceeds to check the Previous Cluster Provider when the Current Cluster Provider does not contain any component information. The Previous Cluster Provider property value can be set to blank after cluster startup following a successful Cluster State restore from backup. The default value is blank. |
This XML file consists of a top-level state-management element, which has one or more local-provider and zero or more cluster-provider
elements. Each of these elements then contains an id element that is used to specify the identifier that can be referenced in the
clockspring.properties file, as well as a class element that specifies the fully-qualified class name to use in order to instantiate the State
Provider. Finally, each of these elements may have zero or more property elements. Each property element has an attribute, name that is the name
of the property that the State Provider supports. The textual content of the property element is the value of the property.
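As a sketch, a state-management.xml file following this structure might look like the following. The fully qualified class names and the Directory property are assumptions based on the provider names discussed in the sections below; the identifiers, Connect String, and Root Node values are illustrative.

<state-management>
    <local-provider>
        <id>local-provider</id>
        <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
        <property name="Directory">./state/local</property>
    </local-provider>
    <cluster-provider>
        <id>zk-provider</id>
        <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
        <property name="Connect String">node1:2181,node2:2181,node3:2181</property>
        <property name="Root Node">/clockspring</property>
    </cluster-provider>
</state-management>

The id values (local-provider and zk-provider in this sketch) are what the corresponding clockspring.properties entries would reference.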
Once these State Providers have been configured in the state-management.xml file (or whatever file is configured), those Providers may be referenced by their identifiers.
While there are not many properties that need to be configured for these providers, they were externalized into a separate state-management.xml file, rather than being configured via the clockspring.properties file, simply because different implementations may require different properties, and it is easier to maintain and understand the configuration in an XML-based file such as this, than to mix the properties of the Provider in with other Clockspring framework-specific properties.
It should be noted that if Processors and other components save state using the Clustered scope, the Local State Provider will be used if the instance is a standalone instance (not in a cluster) or is disconnected from the cluster. This also means that if a standalone instance is migrated to become a cluster, then that state will no longer be available, as the component will begin using the Clustered State Provider instead of the Local State Provider.
If Clockspring is configured to run in standalone mode, the cluster-provider element need not be populated in the state-management.xml
file and will be ignored if it is populated. However, the local-provider element must always be present and populated.
Additionally, if Clockspring is run in a cluster, each node must also have the cluster-provider element present and properly configured.
Otherwise, Clockspring will fail to start up.
Local State Provider
By default, the Local State Provider is configured to be a WriteAheadLocalStateProvider that persists the data to the
$CLOCKSPRING_HOME/state/local directory.
ZooKeeper Cluster State Provider
The default Cluster State Provider is configured to be a ZooKeeperStateProvider. The default
ZooKeeper-based provider must have its Connect String property populated before it can be used. It is also advisable, if multiple Clockspring instances
will use the same ZooKeeper instance, that the value of the Root Node property be changed. For instance, one might set the value to
/nifi/<team name>/production. A Connect String takes the form of comma separated <host>:<port> tuples, such as
my-zk-server1:2181,my-zk-server2:2181,my-zk-server3:2181. In the event a port is not specified for any of the hosts, the ZooKeeper default of
2181 is assumed.
When adding data to ZooKeeper, there are two options for Access Control: Open and CreatorOnly. If the Access Control property is
set to Open, then anyone is allowed to log into ZooKeeper and have full permissions to see, change, delete, or administer the data.
If CreatorOnly is specified, then only the user that created the data is allowed to read, change, delete, or administer the data.
In order to use the CreatorOnly option, Clockspring must provide some form of authentication. See the ZooKeeper Access Control
section below for more information on how to configure authentication.
ZooKeeper Access Control
ZooKeeper provides Access Control to its data via an Access Control List (ACL) mechanism. When data is written to ZooKeeper, Clockspring will provide an ACL
that indicates that any user is allowed to have full permissions to the data, or an ACL that indicates that only the user that created the data is
allowed to access the data. Which ACL is used depends on the value of the Access Control property for the ZooKeeperStateProvider (see the
Configuring State Providers section for more information).
In order to use an ACL that indicates that only the Creator is allowed to access the data, we need to tell ZooKeeper who the Creator is. There are three mechanisms for accomplishing this. The first mechanism is to provide authentication using Kerberos. See Kerberizing Clockspring’s ZooKeeper Client for more information.
The second option, which additionally ensures that network communication is encrypted, is to authenticate using an X.509 certificate on a TLS-enabled ZooKeeper server. See Securing ZooKeeper with TLS for more information.
The third option is to use a username and password. This is configured by specifying a value for the Username and a value for the Password properties
for the ZooKeeperStateProvider (see the Configuring State Providers section for more information). The important thing to keep in mind here, though, is that ZooKeeper
will pass around the password in plain text. This means that a username and password should not be used unless ZooKeeper is running on localhost as a
one-instance cluster, or communications with ZooKeeper occur only over encrypted channels, such as a VPN or an SSL connection.
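As a sketch, the relevant property elements on the ZooKeeperStateProvider entry in state-management.xml might look like the following; the username and password values are placeholders and should be replaced for your environment.

<property name="Access Control">CreatorOnly</property>
<property name="Username">clockspring</property>
<property name="Password">change-me</property>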
Securing ZooKeeper with Kerberos
When Clockspring communicates with ZooKeeper, all communications, by default, are non-secure, and anyone who logs into ZooKeeper is able to view and manipulate all of the Clockspring state that is stored in ZooKeeper. To prevent this, one option is to use Kerberos to manage authentication.
In order to secure the communications with Kerberos, we need to ensure that both the client and the server support the same configuration. Instructions for configuring the Clockspring ZooKeeper client and embedded ZooKeeper server to use Kerberos are provided below.
If Kerberos is not already setup in your environment, you can find information on installing and setting up a Kerberos Server at Red Hat Customer Portal: Configuring a Kerberos 5 Server. This guide assumes that Kerberos already has been installed in the environment in which Clockspring is running.
Note, the following procedures for kerberizing an Embedded ZooKeeper server in your Clockspring Node and kerberizing a ZooKeeper Clockspring client will require that Kerberos client libraries be installed. This is accomplished in Fedora-based Linux distributions via:
yum install krb5-workstation
Once this is complete, the /etc/krb5.conf will need to be configured appropriately for your organization’s Kerberos environment.
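A minimal sketch of an /etc/krb5.conf, assuming the EXAMPLE.COM realm used throughout this section and a hypothetical KDC host name, might look like the following:

[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM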
Kerberizing Embedded ZooKeeper Server
The krb5.conf file on the systems with the embedded ZooKeeper servers should be identical to the one on the system where the krb5kdc service is running. When using the embedded ZooKeeper server, we may choose to secure the server by using Kerberos. All nodes configured to launch an embedded ZooKeeper and using Kerberos should follow these steps.
In order to use Kerberos, we first need to generate a Kerberos Principal for our ZooKeeper servers. The following command is run on the server where the krb5kdc service is running. This is accomplished via the kadmin tool:
kadmin: addprinc "zookeeper/myHost.example.com@EXAMPLE.COM"
Here, we are creating the Principal zookeeper/myHost.example.com in the realm EXAMPLE.COM. We need to use a Principal whose
name takes the form <service name>/<instance name>. In this case, the service is zookeeper and the instance name is myHost.example.com (the fully qualified name of our host).
Next, we will need to create a KeyTab for this Principal. This command is run on the server hosting the Clockspring instance with an embedded ZooKeeper server:
kadmin: xst -k zookeeper-server.keytab zookeeper/myHost.example.com@EXAMPLE.COM
This will create a file in the current directory named zookeeper-server.keytab. We can now copy that file into the $CLOCKSPRING_HOME/conf/ directory. We should ensure
that only the user that will be running Clockspring is allowed to read this file.
We will need to repeat the above steps for each of the instances of Clockspring that will be running the embedded ZooKeeper server, being sure to replace myHost.example.com with
myHost2.example.com, or whatever fully qualified hostname the ZooKeeper server will be run on.
Now that we have our KeyTab for each of the servers that will be running Clockspring, we will need to configure Clockspring’s embedded ZooKeeper server to use this configuration.
ZooKeeper uses the Java Authentication and Authorization Service (JAAS), so we need to create a JAAS-compatible file. In the $CLOCKSPRING_HOME/conf/ directory, create a file
named zookeeper-jaas.conf (this file will already exist if the Client has already been configured to authenticate via Kerberos; that's okay, just add to the file).
We will add the following snippet to this file:
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="./conf/zookeeper-server.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/myHost.example.com@EXAMPLE.COM";
};
Be sure to replace the value of principal above with the appropriate Principal, including the fully qualified domain name of the server.
Next, we need to tell Clockspring to use this as our JAAS configuration. This is done by setting a JVM System Property, so we will edit the conf/bootstrap.conf file. If the Client has already been configured to use Kerberos, this is not necessary, as it was done above. Otherwise, we will add the following line to our bootstrap.conf file:
java.arg.15=-Djava.security.auth.login.config=./conf/zookeeper-jaas.conf
| This additional line in the file doesn’t have to be number 15, it just has to be added to the bootstrap.conf file. Use whatever number is appropriate for your configuration. |
We will want to initialize our Kerberos ticket by running the following command:
kinit -kt zookeeper-server.keytab "zookeeper/myHost.example.com@EXAMPLE.COM"
Again, be sure to replace the Principal with the appropriate value, including your realm and your fully qualified hostname.
Finally, we need to tell the embedded ZooKeeper server to use the SASL Authentication Provider. To do this, we edit the $CLOCKSPRING_HOME/conf/zookeeper.properties file and add the following lines:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
The kerberos.removeHostFromPrincipal and the kerberos.removeRealmFromPrincipal properties are used to normalize the user principal name before comparing an identity to ACLs
applied on a Znode. By default the full principal is used; however, setting the kerberos.removeHostFromPrincipal and the kerberos.removeRealmFromPrincipal properties to true will instruct
ZooKeeper to remove the host and the realm from the logged-in user's identity for comparison. In cases where Clockspring nodes (within the same cluster) use principals that
have different host(s)/realm(s) values, these kerberos properties can be configured to ensure that the nodes' identity will be normalized and that the nodes will have
appropriate access to shared Znodes in ZooKeeper.
The last line is optional but specifies that clients MUST use Kerberos to communicate with our ZooKeeper instance.
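As a hedged illustration of the normalization behavior described above (the principal values are examples only):

# With kerberos.removeHostFromPrincipal=true and kerberos.removeRealmFromPrincipal=true:
#   nifi/node1.example.com@EXAMPLE.COM  ->  nifi
#   nifi/node2.example.com@EXAMPLE.COM  ->  nifi
# Both nodes are then compared against Znode ACLs using the same normalized identity.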
Now, we can start Clockspring, and the embedded ZooKeeper server will use Kerberos as the authentication mechanism.
Kerberizing Clockspring’s ZooKeeper Client
| The Clockspring nodes running the embedded ZooKeeper server will also need to follow the procedure below since they will also be acting as a client at the same time. |
The preferred mechanism for authenticating users with ZooKeeper is to use Kerberos. In order to use Kerberos to authenticate, we must configure a few
system properties, so that the ZooKeeper client knows who the user is and where the KeyTab file is. All nodes configured to store cluster-wide state
using ZooKeeperStateProvider and using Kerberos should follow these steps.
First, we must create the Principal that we will use when communicating with ZooKeeper. This is generally done via the kadmin tool:
kadmin: addprinc "nifi@EXAMPLE.COM"
A Kerberos Principal is made up of three parts: the primary, the instance, and the realm. Here, we are creating a Principal with the primary nifi,
no instance, and the realm EXAMPLE.COM. The primary (nifi, in this case) is the identifier that will be used to identify the user when authenticating
via Kerberos.
After we have created our Principal, we will need to create a KeyTab for the Principal:
kadmin: xst -k nifi.keytab nifi@EXAMPLE.COM
This will create a file in the current directory named nifi.keytab. This keytab file can be copied to the other Clockspring nodes with embedded ZooKeeper servers.
We can now copy that file into the $CLOCKSPRING_HOME/conf/ directory. We should ensure
that only the user that will be running Clockspring is allowed to read this file.
Next, we need to configure Clockspring to use this KeyTab for authentication. Since ZooKeeper uses the Java Authentication and Authorization Service (JAAS), we need to
create a JAAS-compatible file. In the $CLOCKSPRING_HOME/conf/ directory, create a file named zookeeper-jaas.conf and add to it the following snippet:
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="./conf/nifi.keytab"
storeKey=true
useTicketCache=false
principal="nifi@EXAMPLE.COM";
};
We then need to tell Clockspring to use this as our JAAS configuration. This is done by setting a JVM System Property, so we will edit the conf/bootstrap.conf file. We add the following line anywhere in this file in order to tell the Clockspring JVM to use this configuration:
java.arg.15=-Djava.security.auth.login.config=./conf/zookeeper-jaas.conf
Finally, we need to update clockspring.properties to ensure that Clockspring knows to apply SASL-specific ACLs for the Znodes it will create in ZooKeeper for cluster management. To enable this, edit the following properties in the $CLOCKSPRING_HOME/conf/clockspring.properties file as shown below:
nifi.zookeeper.auth.type=sasl
nifi.zookeeper.kerberos.removeHostFromPrincipal=true
nifi.zookeeper.kerberos.removeRealmFromPrincipal=true
| The kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal properties should be consistent with what is set in the ZooKeeper configuration. |
We can initialize our Kerberos ticket by running the following command:
kinit -kt nifi.keytab nifi@EXAMPLE.COM
Now, when we start Clockspring, it will use Kerberos to authenticate as the nifi principal when communicating with ZooKeeper.
Troubleshooting Kerberos Configuration
When using Kerberos, it is important to use fully qualified domain names and not use localhost. Please ensure that the fully qualified hostname of each server is used in the following locations:
-
conf/zookeeper.properties file should use FQDN for
server.1,server.2, …,server.Nvalues. -
The
Connect Stringproperty of the ZooKeeperStateProvider -
The /etc/hosts file should also resolve the FQDN to an IP address that is not
127.0.0.1.
Failure to do so may result in errors similar to the following:
2016-01-08 16:08:57,888 ERROR [pool-26-thread-1-SendThread(localhost:2181)] o.a.zookeeper.client.ZooKeeperSaslClient An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating ZooKeeper Quorum Member's received SASL token. ZooKeeper Client will go to AUTH_FAILED state.
If there are problems communicating or authenticating with Kerberos, this Troubleshooting Guide may be of value.
One of the most important notes in the above Troubleshooting guide is the mechanism for turning on Debug output for Kerberos.
This is done by setting the sun.security.krb5.debug system property.
In Clockspring, this is accomplished by adding the following line to the $CLOCKSPRING_HOME/conf/bootstrap.conf file:
java.arg.16=-Dsun.security.krb5.debug=true
This will cause the debug output to be written to the application log file. By default, this is located at $CLOCKSPRING_HOME/logs/application.log. This output can be rather verbose but provides extremely valuable information for troubleshooting Kerberos failures.
Securing ZooKeeper with TLS
As discussed above, communications with ZooKeeper are insecure by default. The second option for securely authenticating to and communicating with ZooKeeper is to use certificate-based authentication with a TLS-enabled ZooKeeper server (available since ZooKeeper’s 3.5.x releases). Instructions for enabling TLS on an external ZooKeeper ensemble can be found in the ZooKeeper Administrator’s Guide.
Once you have a TLS-enabled instance of ZooKeeper, TLS can be enabled for the Clockspring client by setting nifi.zookeeper.client.secure=true. By default, the ZooKeeper client will use the existing nifi.security.* properties for the keystore and truststore. If you require separate TLS configuration for ZooKeeper, you can create a separate keystore and truststore and configure the following properties
in the $CLOCKSPRING_HOME/conf/clockspring.properties file:
| Property Name | Description | Default |
|---|---|---|
|
Whether to enable ZooKeeper client Ensemble Tracking. |
true |
|
Whether to access ZooKeeper using client TLS. |
false |
|
Filename of the Keystore containing the private key to use when communicating with ZooKeeper. |
none |
|
Optional. The type of the Keystore. Must be |
none |
|
The password for the Keystore. |
none |
|
Filename of the Truststore that will be used to verify the ZooKeeper server(s). |
none |
|
Optional. The type of the Truststore. Must be |
none |
|
The password for the Truststore. |
none |
Whether using the default security properties or the ZooKeeper specific properties, the keystore and truststores must contain the appropriate keys and certificates for use with ZooKeeper (i.e., the keys and certificates need to align with the ZooKeeper configuration either way).
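A minimal sketch of enabling the secure client while reusing the framework TLS configuration, assuming the keystore and truststore referenced by the nifi.security.* properties already satisfy the ZooKeeper configuration:

# clockspring.properties
nifi.zookeeper.client.secure=true
# No ZooKeeper-specific keystore/truststore properties are set here, so the
# existing nifi.security.* keystore and truststore are used for the connection.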
After updating the above properties and starting Clockspring, network communication with ZooKeeper will be secure and ZooKeeper will now use the Clockspring node’s certificate principal when authenticating access. This will be reflected in log messages like the following on the ZooKeeper server:
2020-02-24 23:37:52,671 [myid:2] - INFO [nioEventLoopGroup-4-1:X509AuthenticationProvider@172] - Authenticated Id 'CN=node1,OU=CLOCKSPRING' for Scheme 'x509'
ZooKeeper uses Netty to support network encryption and certificate-based authentication. When TLS is enabled, both the ZooKeeper server and its clients must be configured to use Netty-based
connections instead of the default NIO implementations. This is configured automatically for Clockspring when nifi.zookeeper.client.secure is set to
true. Once Netty is enabled, you should see log messages like the following in $CLOCKSPRING_HOME/logs/application.log:
2020-02-24 23:37:54,082 INFO [nioEventLoopGroup-3-1] o.apache.zookeeper.ClientCnxnSocketNetty SSL handler added for channel: [id: 0xa831f9c3]
2020-02-24 23:37:54,104 INFO [nioEventLoopGroup-3-1] o.apache.zookeeper.ClientCnxnSocketNetty channel is connected: [id: 0xa831f9c3, L:/172.17.0.4:56510 - R:8e38869cd1d1/172.17.0.3:2281]
Bootstrap Properties
The bootstrap.conf file in the conf directory allows users to configure settings for how Clockspring should be started.
This includes parameters, such as the size of the Java Heap, what Java command to run, and Java System Properties.
Here, we will address the different properties that are made available in the file. Any changes to this file will take effect only after Clockspring has been stopped and restarted.
| Property | Description |
|---|---|
|
Specifies the fully qualified java command to run. By default, it is simply |
|
The username to run Clockspring as. For instance, if Clockspring should be run as the |
|
Whether or not to preserve shell environment while using |
|
The lib directory to use for Clockspring. By default, this is set to |
|
The conf directory to use for Clockspring. By default, this is set to |
|
When Clockspring is instructed to shutdown, the Bootstrap will wait this number of seconds for the process to shutdown cleanly. At this amount of time,
if the service is still running, the Bootstrap will |
|
Any number of JVM arguments can be passed to the Clockspring JVM when the process is started. These arguments are defined by adding properties to bootstrap.conf that
begin with |
|
HTTP URL on which Clockspring listens for management requests. Defaults to |
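As an illustration of JVM arguments in conf/bootstrap.conf, following the same java.arg.N form used elsewhere in this guide, heap settings typically look like the following; the argument numbers and sizes are examples only and should be adjusted for your environment.

# Illustrative JVM heap settings
java.arg.2=-Xms512m
java.arg.3=-Xmx512m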
Proxy Configuration
When running Clockspring behind a proxy there are a couple of key items to be aware of during deployment.
-
Clockspring comprises a number of web applications (web UI, web API, documentation, custom UIs, data viewers, etc.), so the mapping needs to be configured for the root path so that all context paths are passed through accordingly. For instance, if only the
/nificontext path was mapped, the custom UI for UpdateAttribute will not work, since it is available at/update-attribute-ui-<version>. -
Clockspring’s REST API will generate URIs for each component on the graph. Since requests are coming through a proxy, certain elements of the URIs being generated need to be overridden. Without overriding, the users will be able to view the dataflow on the canvas but will be unable to modify existing components. Requests will be attempting to call back directly to Clockspring, not through the proxy. The elements of the URI can be overridden by adding the following HTTP headers when the proxy generates the HTTP request to the Clockspring instance:
X-ProxyScheme - the scheme to use to connect to the proxy
X-ProxyHost - the host of the proxy
X-ProxyPort - the port the proxy is listening on
X-ProxyContextPath - the path configured to map to the Clockspring instance
-
If Clockspring is running securely, any proxy needs to be authorized to proxy user requests. These can be configured in the Clockspring UI through the Global Menu. Once these permissions are in place, proxies can begin proxying user requests. The end user identity must be relayed in an HTTP header. For example, if the end user sends a request to the proxy, the proxy must authenticate the user. Following this, the proxy can send the request to Clockspring. In this request, an HTTP header should be added as follows.
X-ProxiedEntitiesChain: <end-user-identity>
If the proxy is configured to send to another proxy, the request to Clockspring from the second proxy should contain a header as follows.
X-ProxiedEntitiesChain: <end-user-identity><proxy-1-identity>
An example Apache proxy configuration that sets the required properties may look like the following. Complete proxy configuration is outside the scope of this document. Please refer to the proxy's documentation for guidance on your deployment environment and use case.
...
<Location "/clockspring">
...
SSLEngine On
SSLCertificateFile /path/to/proxy/certificate.crt
SSLCertificateKeyFile /path/to/proxy/key.key
SSLCACertificateFile /path/to/ca/certificate.crt
SSLVerifyClient require
RequestHeader add X-ProxyScheme "https"
RequestHeader add X-ProxyHost "proxy-host"
RequestHeader add X-ProxyPort "443"
RequestHeader add X-ProxyContextPath "/clockspring"
RequestHeader add X-ProxiedEntitiesChain "<%{SSL_CLIENT_S_DN}>"
ProxyPass https://clockspring-host:8443
ProxyPassReverse https://clockspring-host:8443
...
</Location>
...
-
Additional Clockspring configuration must be updated to allow the expected Host and context path HTTP headers.
-
By default, if Clockspring is running securely it will only accept HTTP requests with a Host header matching the host[:port] that it is bound to. If Clockspring is to accept requests directed to a different host[:port] the expected values need to be configured. This may be required when running behind a proxy or in a containerized environment. This is configured in a comma separated list in clockspring.properties using the
nifi.web.proxy.hostproperty (e.g.localhost:18443, proxyhost:443). IPv6 addresses are accepted. Please refer to RFC 5952 Sections 4 and 6 for additional details. -
Clockspring will only accept HTTP requests with a X-ProxyContextPath, X-Forwarded-Context, or X-Forwarded-Prefix header if the value is allowed in the
nifi.web.proxy.context.pathproperty in clockspring.properties. This property accepts a comma separated list of expected values. In the event an incoming request has an X-ProxyContextPath, X-Forwarded-Context, or X-Forwarded-Prefix header value that is not present in the allow list, the "An unexpected error has occurred" page will be shown and an error will be written to the application.log.
-
-
Additional configurations at both proxy server and Clockspring cluster are required to make Site-to-Site work behind reverse proxies. See [site_to_site_reverse_proxy_properties] for details.
-
In order to transfer data via Site-to-Site protocol through reverse proxies, both proxy and Site-to-Site client Clockspring users need to have following policies, 'retrieve site-to-site details', 'receive data via site-to-site' for input ports, and 'send data via site-to-site' for output ports.
-
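To illustrate the nifi.web.proxy.host and nifi.web.proxy.context.path allow-list properties described in the list above, the corresponding entries for the Apache example would look roughly like the following in clockspring.properties; the values mirror the proxy headers shown earlier and should be adjusted for your proxy.

nifi.web.proxy.host=proxy-host:443
nifi.web.proxy.context.path=/clockspring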
Session Affinity
All HTTP requests from a single client must be routed to the same Clockspring node for the duration of an authenticated session. This applies to both browser-based users and programmatic clients accessing the REST API. This is not a concern for standalone deployments or direct network access to Clockspring, but accessing clustered nodes through a proxy server or load balancer requires enabling session affinity, also known as sticky sessions. Session affinity is required for mediated access to traditional cluster deployments as well as containerized deployments using platforms such as Kubernetes.
Access to clustered deployments through a gateway requires session affinity for the following reasons:
-
Each node uses a local key for signing and verifying JSON Web Tokens
-
Each node uses a local cache for tracking configuration change transactions
Attempting to access a clustered node through a gateway without session affinity will result in intermittent failures of various types. When authenticating to Clockspring with username and password credentials, the lack of session affinity often results in HTTP 401 Unauthorized responses, indicating that the node did not accept the JSON Web Token. These failures can occur at different times based on the load balancing strategy. Accessing Clockspring using an X.509 certificate avoids the verification issues associated with JSON Web Tokens, but is still subject to problems related to configuration change transaction handling across cluster nodes.
Session Affinity Configuration
Enabling session affinity requires different settings depending on the product or service providing access. It is essential that the session affinity configuration has a timeout that is greater than the session expiration when authenticating with username and password credentials.
Apache HTTP Server Configuration
Apache HTTP Server supports session affinity in the mod_proxy module using the ProxyPass directive with the stickysession parameter to configure a cookie name for request routing.
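A minimal sketch of such a configuration, assuming the mod_proxy, mod_proxy_balancer, and mod_headers modules are loaded; the node hostnames, ports, and the ROUTEID cookie name are illustrative:

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy "balancer://clockspring-cluster">
    BalancerMember "https://clockspring-node1:8443" route=node1
    BalancerMember "https://clockspring-node2:8443" route=node2
</Proxy>
ProxyPass        "/clockspring" "balancer://clockspring-cluster" stickysession=ROUTEID
ProxyPassReverse "/clockspring" "balancer://clockspring-cluster"

Here the stickysession parameter names the cookie used to route each client back to the same node.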
Analytics Framework
Clockspring has an internal analytics framework which can be enabled to predict back pressure occurrence, given the configured settings for threshold on a queue. The model used by default for prediction is an ordinary least squares (OLS) linear regression. It uses recent observations from a queue (either number of objects or content size over time) and calculates a regression line for that data. The line’s equation is then used to determine the next value that will be reached within a given time interval (e.g. number of objects in queue in the next 5 minutes). Below is an example graph of the linear regression model for Queue/Object Count over time which is used for predictions:

In order to generate predictions, local status snapshot history is queried to obtain enough data to generate a model. By default, component status snapshots are captured every minute. Internal models need at least two observations to generate a prediction, so it may take two or more minutes for predictions to become available by default. If predictions are needed sooner than what is provided by default, the timing of snapshots can be adjusted using the nifi.components.status.snapshot.frequency value in clockspring.properties.
Clockspring evaluates the model's effectiveness before sending prediction information by using the model's R-Squared score by default. One important note: R-Squared is a measure of how closely the regression line fits the observation data, not of how accurate the prediction will be; therefore there may be some measure of error. If the R-Squared score for the calculated model meets the configured threshold (as defined by nifi.analytics.connection.model.score.threshold) then the model will be used for prediction. Otherwise the model will not be used and predictions will not be available until a model is generated with a score that exceeds the threshold. The default R-Squared threshold value is .90; however, this can be tuned based on prediction requirements.
The prediction interval nifi.analytics.predict.interval can be configured to project out further when back pressure will occur. The prediction query interval nifi.analytics.query.interval can also be configured to determine how far back in time past observations should be queried in order to generate the model. Adjustments to these settings may require tuning of the model’s scoring threshold value to select a score that can offer reasonable predictions.
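A sketch of these settings in clockspring.properties; the interval values are illustrative, and .90 is the default scoring threshold mentioned above:

nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.analytics.connection.model.score.threshold=.90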
See Analytics Properties for complete information on configuring analytic properties.
System Properties
Clockspring is configured using the clockspring.properties file located in the conf directory. This file controls how Clockspring runs and allows you to override default settings.
Clockspring loads all settings from nifi.properties first, then overrides any matching keys with values from clockspring.properties. If a property is not defined in clockspring.properties, the value from nifi.properties is used.
| You should never modify nifi.properties directly. This file is overwritten during upgrades, and any manual changes will be lost. To change a setting, copy it from nifi.properties into clockspring.properties and modify it there. |
| The clockspring.properties file is not modified or replaced during upgrades. Any custom settings in this file will persist across versions. |
| When specifying durations or sizes, always include a unit (e.g., 10 secs, 10 MB)—bare numbers like 10 are not valid. |
| Restart Clockspring after making changes to clockspring.properties for the updates to take effect. |
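For example, to change how often component status snapshots are captured, copy the property from nifi.properties into clockspring.properties and adjust it there, remembering to include a unit; the value shown is illustrative.

nifi.components.status.snapshot.frequency=2 mins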
Upgrade Recommendations
While both clockspring.properties and nifi.properties are relatively stable, their contents can change between releases. Always review these files during an upgrade to identify new or updated properties.
Reusing your existing clockspring.properties and other config files prevents unnecessary reconfiguration after each upgrade. See Upgrading Clockspring for more details.
Core Properties
The first section of the clockspring.properties file is for the Core Properties. These properties apply to the core framework as a whole.
| Property | Description |
|---|---|
|
The location of the JSON-based flow configuration file. The default value is |
|
Specifies whether Clockspring creates a backup copy of the flow automatically when the flow is updated. The default value is |
|
The location of the archive directory where backup copies of the flow.json are saved. The default value is |
|
The lifespan of archived flow.json files. Clockspring will delete expired archive files when it updates flow.json if this property is specified. Expiration is determined based on current system time and the last modified timestamp of an archived flow.json. If no archive limitation is specified in clockspring.properties, Clockspring removes archives older than |
|
The total data size allowed for the archived flow.json files. Clockspring will delete the oldest archive files until the total archived file size becomes less than this configuration value, if this property is specified. If no archive limitation is specified in clockspring.properties, Clockspring uses |
|
The number of archive files allowed. Clockspring will delete the oldest archive files so that only N latest archives can be kept, if this property is specified. |
|
Indicates whether, upon restart, the components on the Clockspring graph should return to their last state. The default value is |
|
Indicates the shutdown period. The default value is |
|
When many changes are made to the flow.json, this property specifies how long to wait before writing out the changes, so as to batch the changes into a single write. The default value is |
|
If a component allows an unexpected exception to escape, it is considered a bug. As a result, the framework will pause (or administratively yield) the component for this amount of time. This is done so that the component does not use up massive amounts of system resources, since it is known to have problems in the existing state. The default value is |
|
When a component has no work to do (i.e., is "bored"), this is the amount of time it will wait before checking to see if it has new data to work on. This way, it does not use up CPU resources by checking for new work too often. When setting this property, be aware that it could add extra latency for components that do not constantly have work to do, as once they go into this "bored" state, they will wait this amount of time before checking for more work. The default value is |
|
When drawing a new connection between two components, this is the default value for that connection’s back pressure object threshold. The default is |
|
When drawing a new connection between two components, this is the default value for that connection’s back pressure data size threshold. The default is |
|
This is the location of the file that specifies how authorizers are defined. The default value is |
|
This is the location of the file that specifies how username/password authentication is performed. This file is
only considered if |
|
The location of the nar library. The default value is |
|
The location that certain providers (e.g. UserGroupProviders) will look for previous configurations to restore from. There is no default value. |
|
Allows for an administrator-defined html-formatted message to be pinned to the top of the screen. |
|
HTML to display in a consent/monitoring dialog which must be accepted by the user before access is granted |
|
Flag to toggle whether Clockspring should run in FIPS mode. Default is false
|
|
The location of the nar working directory. The default value is |
|
If set to |
|
Time to wait for a Processor’s life-cycle operation ( |
State Management
The State Management section of the Properties file provides a mechanism for configuring local and cluster-wide mechanisms for components to persist state. See the State Management section for more information on how this is used.
| Property | Description |
|---|---|
|
The XML file that contains configuration for the local and cluster-wide State Providers. The default value is |
|
The ID of the Local State Provider to use. This value must match the value of the |
|
The ID of the Cluster State Provider to use. This value must match the value of the |
|
Specifies whether or not this instance of Clockspring should start an embedded ZooKeeper Server. This is used in conjunction with the ZooKeeperStateProvider. The default value is |
|
Specifies a properties file that contains the configuration for the embedded ZooKeeper Server that is started (if the |
Database Settings
The Database Settings section defines the settings for the internal database, which tracks flow configuration history.
| Property | Description |
|---|---|
|
The location of the Flow Configuration History database directory. The default value is |
Flow Action Reporter
The Flow Action Reporter is a framework interface that supports exporting flow configuration changes using a custom implementation class.
| Property | Description |
|---|---|
|
The class implementing |
FlowFile Repository
The FlowFile repository keeps track of the attributes and current state of each FlowFile in the system.
There are currently two implementations of the FlowFile Repository, which are detailed below.
| Property | Description |
|---|---|
|
The FlowFile Repository implementation. The default value is |
| Switching repository implementations should only be done on an instance with zero queued FlowFiles, and should only be done with caution. |
Write Ahead FlowFile Repository
WriteAheadFlowFileRepository is the default implementation. It persists FlowFiles to disk and can optionally be configured to synchronize all changes to disk. Synchronizing every change is very expensive and can significantly reduce Clockspring performance. However, if synchronization is disabled, there is the potential for data loss in the event of a sudden power loss or an operating system crash. The default value is false.
| Property | Description |
|---|---|
|
If the repository implementation is configured to use the |
|
The location of the FlowFile Repository. The default value is |
|
The FlowFile Repository checkpoint interval. The default value is |
|
If set to |
Volatile FlowFile Repository
This implementation stores FlowFiles in memory instead of on disk. It will result in data loss in the event of power/machine failure or a restart of Clockspring. To use this implementation, set nifi.flowfile.repository.implementation to org.apache.nifi.controller.repository.VolatileFlowFileRepository.
Swap Management
Clockspring keeps FlowFile information in memory (the JVM) but during surges of incoming data, the FlowFile information can start to take up so much of the JVM that system performance suffers. To counteract this effect, Clockspring "swaps" the FlowFile information to disk temporarily until more JVM space becomes available again. These properties govern how that process occurs.
| Property | Description |
|---|---|
|
The Swap Manager implementation. The default value is |
|
The queue threshold at which Clockspring starts to swap FlowFile information to disk. The default value is |
| When a queue begins swapping to disk, Clockspring does not guarantee that all the FlowFiles in the queue are sorted in the order specified by the prioritizers configured on the queue. New FlowFiles arriving at the queue are written to the swap file without considering prioritizers. They are prioritized when the swap file is read back into memory. |
Content Repository
The Content Repository holds the content for all the FlowFiles in the system.
| Property | Description |
|---|---|
|
The Content Repository implementation. The default value is |
File System Content Repository Properties
| Property | Description |
|---|---|
|
The Content Repository implementation. The default value is |
|
When Clockspring processes many small FlowFiles, the contents of those FlowFiles are stored in the content repository, but we do not store the content of each
individual FlowFile as a separate file in the content repository. Doing so would be very detrimental to performance if, for instance, each 120-byte FlowFile were written to its own file. Instead,
we continue writing to the same file until it reaches some threshold. This property configures that threshold. Setting the value too small can result in poor performance due to reading from and
writing to too many files. However, a file can only be deleted from the content repository once there are no longer any FlowFiles pointing to it. Therefore, setting the value too large can result
in data remaining in the content repository for much longer, potentially leading to the content repository running out of disk space. The default value is |
|
The location of the Content Repository. The default value is |
|
If archiving is enabled (see |
|
If archiving is enabled (see |
|
To enable content archiving, set this to |
|
If set to |
|
The frequency with which to schedule the content archive clean up task. The default value is |
Provenance Repository
The Provenance Repository contains the information related to Data Provenance. The next four sections are for Provenance Repository properties.
| Property | Description |
|---|---|
|
The Provenance Repository implementation. The default value is
|
|
The maximum number of events that should be written to a single event file before the file is rolled over. The default value is |
Write Ahead Provenance Repository Properties
| Property | Description |
|---|---|
|
The location of the Provenance Repository. The default value is |
|
The maximum amount of time to keep data provenance information. The default value is |
|
The maximum amount of data provenance information to store at a time.
The default value is |
|
The amount of data to write to a single "event file." The default value is |
|
The number of threads to use for Provenance Repository queries. The default value is |
|
The number of threads to use for indexing Provenance events so that they are searchable. The default value is |
|
Indicates whether to compress the provenance information when an "event file" is rolled over. The default value is |
|
If set to |
|
This is a comma-separated list of the fields that should be indexed and made searchable.
Fields that are not indexed will not be searchable. Valid fields are: |
|
This is a comma-separated list of FlowFile Attributes that should be indexed and made searchable. It is blank by default.
But some good examples to consider are |
|
The repository uses Apache Lucene to provide indexing and searching capabilities. This value indicates how large a Lucene Index should
become before the Repository starts writing to a new Index. Large values for the shard size will result in more Java heap usage when searching the Provenance Repository but should
provide better performance. The default value is NOTE: This value should be smaller than (no more than half of) the |
|
Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from the repository.
If the length of any attribute exceeds this value, it will be truncated when the event is retrieved. The default value is |
|
Apache Lucene creates several "segments" in an Index. These segments are periodically merged together in order to provide faster
querying. This property specifies the maximum number of threads that are allowed to be used for each of the storage directories. The default value is |
|
Each time that a Provenance query is run, the query must first search the Apache Lucene indices (at least, in most cases - there are some queries that are run often and the results are cached to avoid searching the Lucene indices). When a Lucene index is opened for the first time, it can be very expensive and take several seconds. This is compounded by having many different indices, and can result in a Provenance query taking much longer. After the index has been opened, the Operating System’s disk cache will typically hold onto enough data to make re-opening the index much faster - at least for a period of time, until the disk cache evicts this data. If this value is set, Clockspring will periodically open each Lucene index and then close it, in order to "warm" the cache. This will result in far faster queries when the Provenance Repository is large. As with all great things, though, it comes with a cost. Warming the cache does take some CPU resources, but more importantly it will evict other data from the Operating System disk cache and will result in reading (potentially a great deal of) data from the disk. This can result in lower Clockspring performance. However, if Clockspring is running in an environment where CPU and disk are not fully utilized, this feature can result in far faster Provenance queries. The default value for this property is blank (i.e. disabled). |
Persistent Provenance Repository Properties
| Property | Description |
|---|---|
|
The location of the Provenance Repository. The default value is |
|
The maximum amount of time to keep data provenance information. The default value is |
|
The maximum amount of data provenance information to store at a time. The default value is |
|
The amount of time to wait before rolling over the latest data provenance information so that it is available in the User Interface. The default value is |
|
The amount of information to roll over at a time. The default value is |
|
The number of threads to use for Provenance Repository queries. The default value is |
|
The number of threads to use for indexing Provenance events so that they are searchable. The default value is |
|
Indicates whether to compress the provenance information when rolling it over. The default value is |
|
If set to |
|
The number of journal files that should be used to serialize Provenance Event data. Increasing this value will allow more tasks to simultaneously update the repository but will result in more expensive merging of the journal files later. This value should ideally be equal to the number of threads that are expected to update the repository simultaneously, but 16 tends to work well in most environments. The default value is |
|
This is a comma-separated list of the fields that should be indexed and made searchable. Fields that are not indexed will not be searchable. Valid fields are: |
|
This is a comma-separated list of FlowFile Attributes that should be indexed and made searchable. It is blank by default. But some good examples to consider are |
|
Large values for the shard size will result in more Java heap usage when searching the Provenance Repository but should provide better performance. The default value is |
|
Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved. The default value is |
Volatile Provenance Repository Properties
| Property | Description |
|---|---|
|
The Provenance Repository buffer size. The default value is |
Status History Repository
The Status History Repository contains the information for the Component Status History and the Node Status History tools in the User Interface. The following properties govern how these tools work.
| Property | Description |
|---|---|
|
The Status History Repository implementation. The default value is |
|
This value indicates how often to capture a snapshot of the components' status history. The default value is |
In memory repository
If the value of the property nifi.components.status.repository.implementation is VolatileComponentStatusRepository, the
status history data will be stored in memory. If the application stops, all gathered information will be lost.
The buffer.size and snapshot.frequency work together to determine the amount of historical data to retain. As an example, to
configure two days' worth of historical data with a data point snapshot occurring every 5 minutes you would configure
snapshot.frequency to be "5 mins" and the buffer.size to be "576". To further explain this example, for every 60 minutes there
are 12 (60 / 5) snapshot windows for that time period. To keep that data for 48 hours (12 * 48) you end up with a buffer size
of 576.
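For example, the two-day configuration described above corresponds to the following settings; the full name of the buffer size property is an assumption based on the buffer.size name used above.

nifi.components.status.snapshot.frequency=5 mins
# (60 / 5) snapshots per hour * 48 hours = 576
nifi.components.status.repository.buffer.size=576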
| Property | Description |
|---|---|
|
Specifies the buffer size for the Status History Repository. The default value is |
Persistent repository
If the value of the property nifi.components.status.repository.implementation is org.apache.nifi.controller.status.history.questdb.EmbeddedQuestDbStatusHistoryRepository, the
status history data will be stored to the disk in a persistent manner. Data will be kept between restarts. In order to use the persistent repository, the QuestDB NAR must be rebuilt with the include-questdb profile enabled.
| Property | Description |
|---|---|
|
The number of days the node status data (such as Repository disk space free, garbage collection information, etc.) will be kept. The default values
is |
|
The number of days the component status data (i.e., stats for each Processor, Connection, etc.) will be kept. The default value is |
|
The location of the persistent Status History Repository. The default value is |
|
The location of the database backup used in case the database becomes corrupted and must be recreated. The default value is |
|
The QuestDB-based status history repository persists the collected status information in batches. The batch size determines the maximum number of persisted status records at a given time. The default value is |
|
The frequency of persisting collected status records. The default value is |
Site to Site Properties
These properties govern how this instance of Clockspring communicates with remote instances of Clockspring when Remote Process Groups are configured in the dataflow.
Remote Process Groups can choose transport protocol from RAW and HTTP. Properties named with nifi.remote.input.socket.* are RAW transport protocol specific. Similarly, nifi.remote.input.http.* are HTTP transport protocol specific properties.
| Property | Description |
|---|---|
|
The host name that will be given out to clients to connect to this Clockspring instance for Site-to-Site communication. By default, it is the value from |
|
This indicates whether communication between this instance of Clockspring and remote Clockspring instances should be secure. By default, it is set to |
|
The remote input socket port for Site-to-Site communication. By default, it is blank, but it must have a value in order to use RAW socket as transport protocol for Site-to-Site. |
|
Specifies whether HTTP Site-to-Site should be enabled on this host. By default, it is set to |
|
Specifies how long a transaction can stay alive on the server. By default, it is set to |
|
Specifies how long Clockspring should cache information about a remote Clockspring instance when communicating via Site-to-Site. By default, Clockspring will cache the |
Web Properties
These properties pertain to the web-based User Interface.
| Property | Description |
|---|---|
|
The HTTP host. The default value is blank. |
|
The HTTP port. The default value is blank. |
|
The port which forwards incoming HTTP requests to |
|
The name of the network interface to which Clockspring should bind for HTTP requests. It is blank by default. |
|
The HTTPS host. The default value is |
|
The HTTPS port. The default value is |
|
Same as |
|
Cipher suites used to initialize the SSLContext of the Jetty HTTPS port. If unspecified, the runtime SSLContext defaults are used. |
|
Cipher suites that may not be used by an SSL client to establish a connection to Jetty. If unspecified, the runtime SSLContext defaults are used. In Chrome, the SSL cipher negotiated with Jetty may be examined in the 'Developer Tools' plugin, in the 'Security' tab. In Firefox, the SSL cipher negotiated with Jetty may be examined in the 'Secure Connection' widget found to the left of the URL in the browser address bar. |
|
The name of the network interface to which Clockspring should bind for HTTPS requests. It is blank by default. |
|
The space-separated list of application protocols supported when running with HTTPS enabled. The default value is The value can be set to The value can be set to |
|
The location of the Jetty working directory. The default value is |
|
The number of Jetty threads. The default value is |
|
The maximum size allowed for request and response headers. The default value is |
|
A comma separated list of allowed HTTP Host header values to consider when Clockspring is running securely and will be receiving requests to a different host[:port] than it is bound to. For example, when running in a Docker container or behind a proxy (e.g. localhost:18443, proxyhost:443). By default, this value is blank meaning Clockspring should only allow requests sent to the host[:port] that Clockspring is bound to. Requests containing an invalid port in the Host or authority header return an HTTP 421 Misdirected Request status. |
|
A comma separated list of allowed HTTP X-ProxyContextPath, X-Forwarded-Context, or X-Forwarded-Prefix header values to consider. By default, this value is blank meaning all requests containing a proxy context path are rejected. Configuring this property would allow requests where the proxy path is contained in this listing. |
|
The maximum size (HTTP |
|
The maximum number of requests from a connection per second. Requests in excess of this are first delayed, then throttled. |
|
The maximum number of requests for login Access Tokens from a connection per second. Requests in excess of this are rejected with HTTP 429. |
|
A comma separated list of IP addresses. Used to specify the IP addresses of clients which can exceed the maximum requests per second ( |
|
The request timeout for web requests. Requests running longer than this time will be forced to end with an HTTP 503 Service Unavailable response. Default value is |
|
The parameterized format for HTTP request log messages. The format property supports the modifiers and codes described in the Jetty CustomRequestLog. The default value uses the Combined Log Format, which follows the Common Log Format with the addition of the Referer and User-Agent request headers. The CustomRequestLog writes formatted messages using a dedicated SLF4J logger. |
|
The regular expression controlling the JMX MBean names that the REST API is allowed to return. The default value is empty, blocking all MBeans; configuring a pattern allows matching MBeans to be returned. |
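As a minimal sketch, assuming the web property names follow the nifi.web.* naming convention used for other properties in this guide, an instance serving the UI over HTTPS on port 8443 behind a reverse proxy might include settings such as the following in clockspring.properties (the host names, port, and context path are illustrative only):

nifi.web.https.host=clockspring.example.com
nifi.web.https.port=8443
nifi.web.proxy.host=proxyhost:443
nifi.web.proxy.context.path=/clockspring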
Security Properties
These properties pertain to various security features in Clockspring. Many of these properties are covered in more detail in the Security Configuration section of this Administrator’s Guide.
| Property | Description |
|---|---|
|
This is the password used to encrypt any sensitive property values that are configured in processors. By default, it is blank, but the system administrator should provide a value for it. It can be a string of any length, although the recommended minimum length is 10 characters. Be aware that once this password is set and one or more sensitive processor properties have been configured, this password should not be changed. |
|
The algorithm used to encrypt sensitive properties. The default value is |
|
Specifies whether the SSL context factory should be automatically reloaded if updates to the keystore and truststore are detected. By default, it is set to |
|
Specifies the interval at which the keystore and truststore are checked for updates. Only applies if |
|
The full path and name of the keystore. It is blank by default. |
|
The keystore type. It is blank by default. |
|
The keystore password. It is blank by default. |
|
The key password. It is blank by default. |
|
The full path and name of the truststore. It is blank by default. |
|
The truststore type. It is blank by default. |
|
The truststore password. It is blank by default. |
|
Specifies which of the configured Authorizers in the authorizers.xml file to use. By default, it is set to |
|
Whether anonymous authentication is allowed when running over HTTPS. If set to true, client certificates are not required to connect via TLS. |
|
This indicates what type of login identity provider to use. The default value is blank, but it can be set to the identifier of a provider defined in the login identity providers configuration file. |
|
This is the URL for the Online Certificate Status Protocol (OCSP) responder if one is being used. It is blank by default. |
|
This is the location of the OCSP responder certificate if one is being used. It is blank by default. |
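For illustration, and assuming the security property names follow the nifi.security.* convention used in the identity mapping examples below, a typical keystore and truststore configuration might look like the following sketch (the paths and password placeholders must be replaced with real values):

nifi.sensitive.props.key=<sensitive-properties-password>
nifi.security.keystore=./conf/keystore.p12
nifi.security.keystoreType=PKCS12
nifi.security.keystorePasswd=<keystore-password>
nifi.security.keyPasswd=<key-password>
nifi.security.truststore=./conf/truststore.p12
nifi.security.truststoreType=PKCS12
nifi.security.truststorePasswd=<truststore-password>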
Identity Mapping Properties
These properties can be utilized to normalize user identities. When implemented, identities authenticated by different identity providers (certificates, LDAP, Kerberos) are treated the same internally in Clockspring. As a result, duplicate users are avoided and user-specific configurations such as authorizations only need to be setup once per user.
The following examples demonstrate normalizing DNs from certificates and principals from Kerberos:
nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
nifi.security.identity.mapping.value.dn=$1@$2
nifi.security.identity.mapping.transform.dn=NONE
nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance@(.*?)$
nifi.security.identity.mapping.value.kerb=$1@$2
nifi.security.identity.mapping.transform.kerb=NONE
The last segment of each property is an identifier used to associate the pattern with the replacement value. When a user makes a request to Clockspring, their identity is checked against each of those patterns in lexicographical order. For the first one that matches, the replacement specified in the nifi.security.identity.mapping.value.xxxx property is used. So a login with CN=localhost, OU=Clockspring, O=Clockspring, L=Oxon Hill, ST=MD, C=US matches the DN mapping pattern above, the DN mapping value $1@$2 is applied, and the resulting identity is localhost@Clockspring.
In addition to mapping, a transform may be applied. The supported transforms are NONE (no transform applied), LOWER (identity lowercased), and UPPER (identity uppercased). If not specified, the default value is NONE.
| These mappings are also applied to the "Initial Admin Identity", "Cluster Node Identity", and any legacy users in the authorizers.xml file as well as users imported from LDAP (See Authorizers.xml Setup). |
Group names can also be mapped. The following example will accept the existing group name but will lowercase it. This may be helpful when used in conjunction with an external authorizer.
nifi.security.group.mapping.pattern.anygroup=^(.*)$
nifi.security.group.mapping.value.anygroup=$1
nifi.security.group.mapping.transform.anygroup=LOWER
| These mappings are applied to any legacy groups referenced in the authorizers.xml as well as groups imported from LDAP. |
Cluster Common Properties
When setting up a cluster, these properties should be configured the same way on all nodes.
| Property | Description |
|---|---|
|
The interval at which nodes should emit heartbeats to the Cluster Coordinator. The default value is |
|
Maximum number of heartbeats a Cluster Coordinator can miss for a node in the cluster before the Cluster Coordinator updates the node status to Disconnected. The default value is |
|
This indicates whether cluster communications are secure. The default value is |
Cluster Node Properties
Configure these properties for cluster nodes.
| Property | Description |
|---|---|
|
Set this to |
|
The Cluster Leader Election implementation, given as a fully qualified class name or a simple class name. The default implementation uses ZooKeeper for leader election; the implementation can alternatively be set to use Kubernetes Leases. |
|
The prefix string applied to Kubernetes Leases created for tracking cluster leader election. Configuring a prefix is necessary when running more than one Clockspring cluster in the same Kubernetes Namespace. The default value is blank. |
|
The fully qualified address of the node. It is blank by default. |
|
The node’s protocol port. It is blank by default. |
|
The maximum number of threads that should be used to communicate with other nodes in the cluster. This property defaults to |
|
When the state of a node in the cluster is changed, an event is generated
and can be viewed in the Cluster page. This value indicates how many events to keep in memory for each node. The default value is |
|
When connecting to another node in the cluster, specifies how long this node should wait before considering
the connection a failure. The default value is |
|
When communicating with another node in the cluster, specifies how long this node should wait to receive information
from the remote node before considering the communication with the node a failure. The default value is |
|
The maximum number of outstanding web requests that can be replicated to nodes in the cluster. If this number of requests is exceeded, the embedded Jetty server will return a "409: Conflict" response. This property defaults to |
|
The location of the node firewall file. This is a file that may be used to list all the nodes that are allowed to connect to the cluster. It provides an additional layer of security. This value is blank by default, meaning that no firewall file is to be used. See Cluster Firewall Configuration for file format details. |
|
Specifies the amount of time to wait before electing a Flow as the "correct" Flow. If the number of Nodes that have voted is equal to the number specified
by the |
|
Specifies the number of Nodes required in the cluster to cause early election of Flows. This allows the nodes in the cluster to avoid waiting a long time before starting processing once at least this number of nodes has joined the cluster. |
|
Specifies the port to listen on for incoming connections for load balancing data across the cluster. The default value is |
|
Specifies the hostname to listen on for incoming connections for load balancing data across the cluster. If not specified, it defaults to the value used by the node’s cluster address property. |
|
The maximum number of connections to create between this node and each other node in the cluster. For example, if there are 5 nodes in the cluster and this value is set to 4, there will be up to 20 socket connections established for load-balancing purposes (5 x 4 = 20). The default value is |
|
The maximum number of threads to use for transferring data from this node to other nodes in the cluster. While a given thread can only write to a single socket at a time, a single thread is capable of servicing multiple connections simultaneously because a given connection may not be available for reading/writing at any given time. The default value is NOTE: Increasing this value will allow additional threads to be used for communicating with other nodes in the cluster and writing the data to the Content and FlowFile Repositories. However, if this property is set to a value greater than the number of nodes in the cluster multiplied by the number of connections per node ( |
|
When communicating with another node, if this amount of time elapses without making any progress when reading from or writing to a socket, then a TimeoutException will be thrown. This will then result in the data either being retried or sent to another node in the cluster, depending on the configured Load Balancing Strategy. The default value is |
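As a hedged example, assuming the cluster property names follow the nifi.cluster.* convention, a node joining a cluster might set properties along the following lines (the addresses and ports are illustrative):

nifi.cluster.is.node=true
nifi.cluster.node.address=node1.example.com
nifi.cluster.node.protocol.port=11443
nifi.cluster.load.balance.host=node1.example.com
nifi.cluster.load.balance.port=6342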
ZooKeeper Properties
Clockspring depends on Apache ZooKeeper for determining which node in the cluster should play the role of Primary Node and which node should play the role of Cluster Coordinator. These properties must be configured in order for Clockspring to join a cluster.
| Property | Description |
|---|---|
|
The Connect String that is needed to connect to Apache ZooKeeper. This is a comma-separated list
of hostname:port pairs. For example, |
|
How long to wait when connecting to ZooKeeper before considering the connection a failure. The default value is |
|
How long to wait after losing a connection to ZooKeeper before the session is expired. The default value is |
|
The root ZNode that should be used in ZooKeeper. ZooKeeper provides a directory-like structure
for storing data. Each 'directory' in this structure is referred to as a ZNode. This denotes the root ZNode, or 'directory',
that should be used for storing data. The default value is |
|
Whether to access ZooKeeper using client TLS. |
|
Filename of the Keystore containing the private key to use when communicating with ZooKeeper. |
|
Optional. The type of the Keystore. Must be |
|
The password for the Keystore. |
|
Filename of the Truststore that will be used to verify the ZooKeeper server(s). |
|
Optional. The type of the Truststore. Must be |
|
The password for the Truststore. |
|
Maximum buffer size in bytes for packets sent to and received from ZooKeeper. The default follows the standard ZooKeeper maximum packet size. The ZooKeeper Administrator’s Guide categorizes this property as an unsafe option. Changing this property requires setting the corresponding jute.maxbuffer value on the ZooKeeper servers as well. |
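For example, assuming the ZooKeeper property names follow the nifi.zookeeper.* convention, a three-node ZooKeeper ensemble accessed over client TLS might be referenced with settings like the following sketch (hostnames, paths, and passwords are placeholders):

nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
nifi.zookeeper.client.secure=true
nifi.zookeeper.security.keystore=./conf/zk-keystore.p12
nifi.zookeeper.security.keystoreType=PKCS12
nifi.zookeeper.security.keystorePasswd=<keystore-password>
nifi.zookeeper.security.truststore=./conf/zk-truststore.p12
nifi.zookeeper.security.truststoreType=PKCS12
nifi.zookeeper.security.truststorePasswd=<truststore-password>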
Kerberos Properties
| Property | Description |
|---|---|
|
The location of the krb5 file, if used. It is blank by default. At this time, only a single krb5 file is allowed to
be specified per Clockspring instance, so this property is configured here to support service principals rather than in individual Processors.
If necessary, the krb5 file can support multiple realms.
Example: |
|
The name of the Clockspring Kerberos service principal, if used. It is blank by default. Note that this property is for Clockspring to authenticate as a client to other systems.
Example: |
|
The file path of the Clockspring Kerberos keytab, if used. It is blank by default. Note that this property is for Clockspring to authenticate as a client to other systems.
Example: |
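A sketch of a Kerberos configuration, assuming the property names follow the nifi.kerberos.* convention (the realm, principal, and file paths are placeholders):

nifi.kerberos.krb5.file=/etc/krb5.conf
nifi.kerberos.service.principal=clockspring/node1.example.com@EXAMPLE.COM
nifi.kerberos.service.keytab.location=/etc/security/keytabs/clockspring.service.keytab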
Analytics Properties
These properties determine the behavior of the internal Clockspring predictive analytics capability, such as backpressure prediction, and should be configured the same way on all nodes.
| Property | Description |
|---|---|
|
This indicates whether prediction should be enabled for the cluster. The default is |
|
The time interval for which analytical predictions (e.g. queue saturation) should be made. The default value is |
|
The time interval to query for past observations (e.g. the last 3 minutes of snapshots). The default value is |
|
The implementation class for the status analytics model used to make connection predictions. The default value is |
|
The name of the scoring type that should be used to evaluate the model. The default value is |
|
The threshold for the scoring value; the model score must be above this threshold for the model’s predictions to be used. The default value is |
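For example, assuming the analytics property names follow the nifi.analytics.* convention, predictions could be enabled cluster-wide with settings such as the following (the interval values are illustrative):

nifi.analytics.predict.enabled=true
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins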
Runtime Monitoring Properties
The Long-Running Task Monitor periodically checks the Clockspring processor executor threads and produces warning logs and bulletin messages for tasks that have been running for a longer period of time.
It can be used to detect possibly stuck or hanging processor tasks.
Note the performance impact of the task monitor: it creates a thread dump on every run, which may affect normal flow execution.
The Long-Running Task Monitor is disabled by default; it remains disabled as long as its properties are left unset.
To enable it, both nifi.monitor.long.running.task.schedule and nifi.monitor.long.running.task.threshold properties need to be configured with valid time periods.
| Property | Description |
|---|---|
|
The time period between successive executions of the Long-Running Task Monitor (e.g. |
|
The time period beyond which a task is considered long-running, i.e. stuck / hanging (e.g. |
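For example, the monitor could be enabled with settings such as the following in clockspring.properties (the time periods shown are illustrative):

nifi.monitor.long.running.task.schedule=1 min
nifi.monitor.long.running.task.threshold=5 mins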
Performance Tracking Properties
Clockspring exposes a significant number of metrics by default through the User Interface. However, there are additional metrics that may aid in diagnosing bottlenecks and improving the performance of the dataflow.
The nifi.performance.tracking.percentage property can be used to enable the tracking of these additional metrics. Gathering them, however, requires system calls, which can be expensive on some systems. As a result, this property defaults to a value of 0, indicating that the metrics should be captured 0% of the time, i.e., the feature is disabled by default. To enable this feature, set the value of this property to an integer between 1 and 100, inclusive. This represents the percentage of the time that Clockspring should gather these metrics.
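In clockspring.properties, such a setting would look like the following (using the 20% value from the example below):

nifi.performance.tracking.percentage=20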
For example, if the value is set to 20, then Clockspring will gather these metrics for each processor approximately 20% of the times that the Processor is run. The remainder of the time, it will use the values that it has already captured in order to extrapolate the metrics to additional runs.
The metrics that are gathered include what percentage of the time the processor is utilizing the CPU (versus waiting for I/O to complete or blocking due to monitor/lock contention), what percentage of time the Processor spends reading from the Content Repository, writing to the Content Repository, blocked due to Garbage Collection, etc.
So, continuing our example, if we set the value of the nifi.performance.tracking.percentage property to 20 and a processor is triggered to run 1,000 times, then Clockspring will measure how much CPU
time was consumed over the 200 iterations during which it was measured (i.e., 20% of 1,000). Let’s say that this amounts to 500 milliseconds of CPU time. Additionally, let’s consider
that the Processor took 5,000 milliseconds to complete those 200 invocations because most of the time was spent blocking on Socket I/O. From this, Clockspring will calculate that the CPU
is used approximately 10% of the time (500 / 5,000 * 100%). Now, let’s consider that in order to complete all 1,000 invocations the Processor took 35 seconds. Clockspring will calculate,
then, that the Processor has used approximately 3.5 seconds (or 3500 milliseconds) of CPU time.
As a result, if we set the value of this property higher, up to a value of 100, we will get more accurate results. However, it may be more expensive to monitor.
In order to view these metrics, we can gather diagnostics by running the command clockspring.sh diagnostics <filename> and inspecting the generated file. See Diagnostics for more information.
Upgrading Clockspring
| All nodes in a cluster must be upgraded to the same version as nodes with different versions are not supported in the same cluster. |
Clear Activity and Shutdown Existing Clockspring
On your existing Clockspring installation:
-
Stop all the source processors to prevent the ingestion of new data.
-
Allow Clockspring to run until there is no active data in any of the queues in the dataflow(s).
-
Shutdown your existing Clockspring instance(s).
Install the new Clockspring Version
Install the new Clockspring into a directory parallel to the existing Clockspring installation.
-
Download the latest version of Clockspring.
-
Install the RPM (a sample command is shown after this list).
-
If you are upgrading a Clockspring cluster, repeat these steps on each node in the cluster.
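As an illustrative sketch (the package file name varies by release), the RPM can be installed with a command such as:

sudo rpm -ivh clockspring-<version>.rpm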
In your upgraded installation:
-
Start your new instance.
-
Verify that:
-
All your dataflows have returned to a running state. Some processors may have new properties that need to be configured, in which case they will be stopped and marked Invalid.
-
All your expected controller services and reporting tasks are running again. Address any controller services or reporting tasks that are marked Invalid.
-
Diagnostics
It is possible to get diagnostics data from a Clockspring node by executing the below command:
$ ./bin/clockspring.sh diagnostics --verbose <dumpfilePath>
During the diagnostic, Clockspring sends a request to an already running Clockspring instance, which collects information about the cluster, components, parts of the configuration, memory usage, etc., and writes it to the specified file or, failing that, to the logs.
The verbose switch is optional and controls the level of diagnostic detail. If the dump file path is omitted, Clockspring writes the diagnostics information to the bootstrap.log file.
Automatic diagnostics on restart and shutdown
Clockspring can be configured to automatically execute the diagnostics command in the event of a shutdown. The feature is disabled by default and can be enabled with the nifi.diagnostics.on.shutdown.enabled property in the clockspring.properties configuration file. It is also possible to configure where the files should be stored and how many files should be kept using the below properties:
| Property | Description |
|---|---|
|
(true or false) This property decides whether to run Clockspring diagnostics before shutting down. The default value is |
|
(true or false) This property decides whether to run Clockspring diagnostics in verbose mode. The default value is |
|
This property specifies the location of the Clockspring diagnostics directory. The default value is |
|
This property specifies the maximum permitted number of diagnostic files. If the limit is exceeded, the oldest files are deleted. The default value is |
|
This property specifies the maximum permitted size of the diagnostics directory. If the limit is exceeded, the oldest files are deleted. The default value is |
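As an example, and assuming the remaining property names follow the same nifi.diagnostics.on.shutdown.* convention as the property named above, a configuration that keeps up to ten verbose diagnostic files in a dedicated directory might look like this sketch (the values are illustrative):

nifi.diagnostics.on.shutdown.enabled=true
nifi.diagnostics.on.shutdown.verbose=true
nifi.diagnostics.on.shutdown.directory=./diagnostics
nifi.diagnostics.on.shutdown.max.filecount=10
nifi.diagnostics.on.shutdown.max.directory.size=10 MB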
In the case of a lengthy diagnostic, Clockspring may terminate before the command execution ends. In this case, the graceful.shutdown.seconds property should be set to a higher value in the bootstrap.conf configuration file.
Automatic heap dump on Out of Memory Errors
It is possible to set properties in bootstrap.conf to configure Clockspring to generate a heap dump when an Out of Memory (OOM) error occurs. This can be helpful to analyze for memory leaks. An example of properties to be added to bootstrap.conf follows:
java.arg.heapDumpPath=-XX:HeapDumpPath=./work
java.arg.heapDumpOnOutOfMemory=-XX:+HeapDumpOnOutOfMemoryError
These property values (as set in the example) will cause a heap dump to be generated into the ./work directory. The location of the heap dump is configurable by changing the location of the -XX:HeapDumpPath= argument.
JMX Metrics
It is possible to get JMX metrics using the REST API with read permissions on system diagnostics resources.
The information available depends on the registered MBeans. Metrics can contain data related to performance indicators.
The listing of MBeans is controlled using a regular expression pattern in clockspring.properties. The default value is empty, which blocks all MBeans; the property must be changed before any information is returned. For example, the following setting allows all MBeans:
nifi.web.jmx.metrics.allowed.filter.pattern=.*
An optional query parameter, also a regular expression pattern, restricts the response to MBeans with matching names. Leaving this parameter empty lists all MBeans except those excluded by the configured filter pattern.
https://localhost:8443/nifi-api/system-diagnostics/jmx-metrics?beanNameFilter=bean.name.1|bean.name.2
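A hedged example of querying the endpoint with curl, assuming an access token has already been obtained from the REST API (the token variable and the bean name filter are placeholders):

curl -H "Authorization: Bearer $TOKEN" \
  "https://localhost:8443/nifi-api/system-diagnostics/jmx-metrics?beanNameFilter=java.lang.*"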
An example output would look like this:
[
{
"beanName" : "bean.name.1,type=type1",
"attributeName" : “attribute-name",
"attributeValue" : “attribute-value”
},
{
"beanName" : "bean.name.2, type=type2",
"attributeName" : "attribute-name",
"attributeValue" : integer-value
}
]