# RapidIdentity Administrators' and Users' Guide

##### Studio Applications - Providers & Consumers

Applications are how RapidIdentity Studio imports Data Provider information and exports Data Consumer information. Currently, the most efficient way to set up a Data Provider or a Data Consumer is by importing one from the Studio Catalog.

Once the application has been installed from the Catalog, click Configure to view or update the details.

There are three major sections that should be configured: Connection Settings, Record Definitions, and Record Mappings.

1. When all of the configuration settings have been verified for Provider applications, run the associated Provider Jobs to stage the imported data into the Provider namespace.

2. After the Provider job has completed successfully, run the associated Metaverse Jobs to construct the data into the Metaverse Namespace.

3. Consumer Jobs are run after the Consumer Application has been configured, and are required to construct the data in the Consumer namespace.

###### Connection Settings

The Connection Settings section contains configuration elements that control access to the Provider or the connection to the Consumer. You'll need to verify or update the Type, Configuration, and System Credentials on this screen.

###### Type

The Connection Type is defined when the application is created and is displayed here, but is not modifiable at this screen. There are four Connection Types:

• Delimited Text

• Web Service Client

• OneRoster

### Note

If the type of connection is OneRoster, the Type menu will not be visible, and a OneRoster Manifest option will appear.

• EdSync

###### Configuration

The settings here will vary depending on the Connection Type defined for the application, and the settings within each menu vary further depending on the communication method chosen. Below are some common fields that need to be defined for the given options.

Table 25. Delimited Text File Configuration Inputs

Field

Description

Protocol

File transfer protocol to use. The options for Delimited Text connections are:

• SMB

• S3

• SFTP

• FTP

• FTPS

Host

The DNS hostname or IP address of the server hosting the data

### Note

For S3, this can include a region or endpoint as a prefix value followed by a colon.

Port

Port number if different from the standard port number defined for the protocol.

### Note

To use the standard port number for the protocol (e.g. port 22 for SFTP), specify -1, which is the default value.

Path

Path to the file location

User Directory is Root

Check this box if the user directory is the root directory on the server

### Note

This is only used for SFTP, FTPS, and FTP.

Timeout (MS)

Set the various timeout values, in milliseconds

### Note

For Delimited Text, the different timeout settings apply as follows:

• Socket Timeout: Applies to SFTP, FTP, FTPS, and S3

• Connection Timeout: Applies to FTP, FTPS, S3

• Data Timeout: Applies to FTP, FTPS, and S3

Character Set

The character set used in the text file

### Note

RapidIdentity Studio supports any character set supported by Java. However, the most commonly used are UTF-8, ISO-8859-1, and CP1250.

Field Separator*

The character used to separate fields in the file. Default character is a comma (,)

### Note

This should be changed to a different character, such as a pipe (|), if the source files are not comma-separated (for example, pipe-delimited files).

Quote Character*

The character(s) used to set apart quoted strings. The default is a quotation mark, written in Java-escaped form as \"

Escape Character*

The character(s) used as an escape. Default character is null (no value)

### *

The provided strings for Field Separator, Quote Character, and Escape Character should be escaped according to Java's string escape rules (for example, enter \t for a tab separator).

Quote Handling

Define the behavior by which quotes will be printed. Default method is Minimal

Trust All Certificates

Check this box if you want to trust all TLS or SSH certificates passed to the server

Trust Self-Signed Certificates

Check this box to trust any self-signed TLS certificates passed to the server

### Note

This only applies to FTPS connections

Extra Properties

Click Add Another Extra Property to include a name-value pair of JCIFS-NG properties. Multiple pairs may be added as needed

### Note

This only applies to SMB connections

Table 26. Web Service Client Configuration Inputs

Field

Description

Base URL

The full URL of the Web Service Client hosting the data

Query Parameters

The value pairs for a query string, if applicable

HTTP Headers

The value pairs for HTTP headers, if applicable

Trust All Certificates

Trust all TLS certificates passed to the server when checked

###### OneRoster Manifest Configuration

The OneRoster Manifest is only applicable for CSV Consumers.

###### System Credentials

The System Credentials will also vary depending on the Connection Type, and the inputs vary by Credential Type. Enter the Credential Type, then fill out the configuration and credential information for the rest of the fields. Each Credential Type has its own set of requirements as noted in the table below.

Table 27. System Credentials Requirements

Credential Type

Credential Field

Description

OAuth1 One Legged

Signature Method

Choose signature method to use from:

• Plaintext

• HMAC SHA1

• HMAC SHA256

• RSA SHA1

Consumer Key

Enter the consumer key generated by the third party

### Note

For EdSync and OneRoster Web Services Consumers, you will instead Generate a Consumer Key

Consumer Secret

Enter the consumer secret generated by the third party

### Note

For EdSync and OneRoster Web Services Consumers, you will instead Generate a Consumer Secret

Token Secret

Enter the token secret generated by the third party

Callback URL

Enter the URL to be navigated to once the process has completed

OAuth1 Two Legged

Signature Method

Choose signature method to use from:

• Plaintext

• HMAC SHA1

• HMAC SHA256

• RSA SHA1

Consumer Key

Enter the consumer key generated by the third party

Consumer Secret

Enter the consumer secret generated by the third party

Request Token URL

Enter the URL used to obtain the request token

Callback URL

Enter the URL to be navigated to once the process has completed

OAuth2 Bearer Only

Bearer Token

Enter the string representing the bearer token

Client ID

Enter the client ID set for the resource

Client Secret

Enter the client secret associated with the ID

Username

Enter the username needed to access the resource

Password

Enter the password needed to access the resource

Requested Scopes

Enter any scope information needed to limit access

Token Revocation URL

Enter the URL used to revoke tokens, if applicable

OAuth Client Credentials

Client ID

Enter the client ID set for the resource

Client Secret

Enter the client secret associated with the ID

Requested Scopes

Enter any scope information needed to limit access

Token Revocation URL

Enter the URL used to revoke tokens, if applicable

OAuth2 JWT Bearer

Client ID

Enter the client ID set for the resource

Private Key

Enter the private key generated by the third party

Username

Enter the username for the related private or public proxy

Passcode for Private Key

Enter the passcode required for the private key

Requested Scopes

Enter any scope information needed to limit access

Token Revocation URL

Enter the URL used to revoke tokens, if applicable

OAuth2 Authorization Code

Client ID

Enter the client ID set for the resource

Client Secret

Enter the client secret associated with the ID

Username

Enter the username for the related private or public proxy

Authorization Code

Enter the code needed to authorize the transaction

Requested Scopes

Enter any scope information needed to limit access

Authorization URL

Enter the URL for the authorization

Token Revocation URL

Enter the URL used to revoke tokens, if applicable

Redirect URL

Enter the URL to redirect to once the process has been completed

AWS

Access Key

Enter the AWS access key

Secret Key

Enter the secret key associated with the access key

STS Role ARN

The ARN of the IAM role to assume (using STS) during S3 operations

Private Key

Username

Enter the username credential to access the resource

Public Key

Enter the public key generated by the third party

Private Key

Enter the private key generated by the third party

Passcode for Private Key

(Optional) Enter the passcode needed for the private key

API Key

API Key

Enter the API key for the resource

###### Studio JMESPath Usage

JMESPath (pronounced like "James Path") is an expression language for extracting and transforming JSON data. Studio Provider connections use JMESPath to extract data from responses to web service requests.

The JMESPath standard is maintained at https://jmespath.org/, which also offers a tutorial, examples, and a simple app for experimenting with JMESPath expressions.

There are currently three types of JMESPath expressions that may need to be provided to Studio:

1. Studio > Applications > (Your WS_CLIENT Application) > Record Definitions > (Your Record Definition) > Details > JMESPath Records Selector (required)

2. Studio > Applications > (Your WS_CLIENT Application) > Record Definitions > (Your Record Definition) > Details > Field Definitions > (Your Field Definition) > Field Value Selector (required)

3. Studio > Applications > (Your WS_CLIENT Application) > Record Definitions > (Your Record Definition) > JMESPath Next Page URL Selector (optional)

It is useful to examine these in the context of the two most common forms of responses to requests for listing records:

1. Top-level array of objects

[
{"name": "Seattle", "state": "WA"},
{"name": "New York", "state": "NY"},
{"name": "Bellevue", "state": "WA"},
{"name": "Olympia", "state": "WA"}
]
2. Top-level object with property containing the array of objects

{
"locations": [
{"name": "Seattle", "state": "WA"},
{"name": "New York", "state": "NY"},
{"name": "Bellevue", "state": "WA"},
{"name": "Olympia", "state": "WA"}
]
}
###### JMESPath Records Selector

The goal of the Records Selector is to identify where in the response data the array of objects we are interested in exists. For the typical response formats above, the JMESPath expression needed is quite simple. The input to the expression is the root of the response data.

1. Top-level array of objects: [*]

1. [] also works

2. Top-level object with property containing the array of objects: locations[*]

1. locations[] or locations also work

For more complicated responses or if you need to assemble sets of objects from multiple locations or filter or transform the objects, you'll want to look through the tutorial and examples, but more complicated scenarios here are going to be rare.
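To make the effect of these selectors concrete, here is a plain-JavaScript mirror of the two cases (Studio itself evaluates real JMESPath expressions; the variable names below are invented for illustration):

```javascript
// Plain-JavaScript mirror of the two common Records Selector cases.
// Studio evaluates real JMESPath; this only illustrates the result.

// Case 1: top-level array of objects.
// The selector "[*]" (or "[]") selects the array itself.
const topLevelArray = [
  { name: "Seattle", state: "WA" },
  { name: "New York", state: "NY" }
];
const case1Records = topLevelArray;

// Case 2: top-level object with a property containing the array.
// The selector "locations[*]" selects that property's array.
const response = {
  locations: [
    { name: "Seattle", state: "WA" },
    { name: "Olympia", state: "WA" }
  ]
};
const case2Records = response.locations;
```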

###### Field Value Selector

The goal of the Field Value Selector is to identify the value of a particular field in each of the records in the array selected by the Record Selector. The input to the expression is an individual record from the array.

Using the above example responses in which each property has a single scalar (i.e., string, number, or boolean) value:

{"name": "Seattle", "state": "WA"}

The Field Value Selector is going to be just the name of the property within the object (e.g., name or state).

There are also many common variations.

1. Property is an array:

{"stooges": ["Larry", "Curly", "Moe"]}
1. For all values: stooges[*]

2. For just the first value: stooges[0]

3. For just the last value: stooges[-1]

2. Property is an object:

{"name": {"first": "John", "middle": "Paul", "last": "Jones"}}

This is a slightly harder case because Studio fields can only be a scalar value or an array of scalar values, so you need to decide on a case-by-case basis the strategy you want to use to represent the property. The most common strategy would be to flatten the structure by defining separate record fields for each of the sub-properties, as follows:

1. field givenName: name.first

2. field middleName: name.middle

3. field surname: name.last

3. Property is an array of objects (this is common in Google Directory API):

{
"phones": [
{"value": "+18005551212", "type": "work"},
{"value": "+1800COLLECT", "type": "home"}
]
}

This is even harder, but there are strategies that can work, such as:

1. flatten based on some key property

1. field workPhone: phones[?type == 'work'].value

2. field homePhone: phones[?type == 'home'].value

2. flatten and extract arrays of each sub-property in which you are interested

1. field phoneValues: phones[].value

1. which gives ["+18005551212", "+1800COLLECT"]

2. field phoneTypes: phones[].type

1. which gives ["work", "home"]

3. extract arrays of combined properties

1. field phones: phones[].join(':', [type, value])

1. which gives ["work:+18005551212", "home:+1800COLLECT"]

4. pull the objects in as an array of JSON strings that can be parsed by a value template expression (JavaScript) further down the line

1. field phones: phones[].to_string(@)

2. which gives ["{\"value\": \"+18005551212\", \"type\": \"work\"}", "{\"value\": \"+1800COLLECT\", \"type\": \"home\"}"]
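The flattening strategies above can be mirrored in plain JavaScript to see what each one produces (Studio evaluates these as JMESPath; this sketch only illustrates the resulting values):

```javascript
// Plain-JavaScript equivalents of the flattening strategies, applied to
// the phones example record.
const record = {
  phones: [
    { value: "+18005551212", type: "work" },
    { value: "+1800COLLECT", type: "home" }
  ]
};

// 1. Flatten based on a key property: phones[?type == 'work'].value
const workPhone = record.phones
  .filter(p => p.type === "work")
  .map(p => p.value); // ["+18005551212"]

// 2. Extract an array of one sub-property: phones[].value
const phoneValues = record.phones.map(p => p.value);

// 3. Extract arrays of combined properties: phones[].join(':', [type, value])
const phonesCombined = record.phones.map(p => [p.type, p.value].join(":"));

// 4. Keep each object as a JSON string: phones[].to_string(@)
const phonesJson = record.phones.map(p => JSON.stringify(p));
```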

###### JMESPath Next Page URL Selector

The goal of the Next Page URL selector is to build a URL that can be used to get to the next page of results from an API that can break up large result sets across multiple web service requests/responses. It is only needed for APIs that break up the data into multiple "pages" using some method other than those Studio supports natively. Currently native paging support includes the following:

• Link HTTP response header with rel="next", e.g.:

• Link: <https://provider.example.com/users?offset=25&maxResults=25>; rel="next"

• top level links property in response data, with sub-property next, e.g.:

{
"locations": [
{"name": "Seattle", "state": "WA"},
{"name": "New York", "state": "NY"},
{"name": "Bellevue", "state": "WA"},
{"name": "Olympia", "state": "WA"}
],
"links": {
"self": "https://provider.example.com/users?offset=0&maxResults=25",
"next": "https://provider.example.com/users?offset=1&maxResults=25",
"last": "https://provider.example.com/users?offset=517&maxResults=25"
}
}

If the API requires paging but uses a scheme other than the above, you will need to define the Next Page URL selector. Unlike the other two JMESPath selectors, input to the Next Page URL Selector is not just the response data, but rather a JSON object that contains other contextual information as well:

• initialUrl: the URL used to get the previous page (before any redirection)

• finalUrl: the URL used to get the previous page (after all redirections)

• statusCode: the HTTP status code returned from the request for the previous page

• recordOffset: the total number of records returned by previous requests

• pageOffset: the total number of pages returned by previous requests

• data: the previous response JSON data

For example, an API in the style of GitHub's may return a Link response header such as:

Link: <https://api.github.com/user/repos?page=3&per_page=100>; rel="next",<https://api.github.com/user/repos?page=50&per_page=100>; rel="last"

which corresponds to the following next and last page URLs:

{
"next": "https://api.github.com/user/repos?page=3&per_page=100",
"last": "https://api.github.com/user/repos?page=50&per_page=100"
}

Example: OneRoster 1.1 uses the query parameters offset and limit to control paging behavior. The specification recommends that servers return Link headers as described above, but some implementations do not. To implement paging in that case, configure the initial URL path for a user's records to be something close to https://oneroster.example.com/ims/oneroster/v1p1/users?offset=0&limit=500. Then, when generating the URL for the next page after the first page has been handled, the selector may be presented with input similar to this:

{
"initialUrl": "https://oneroster.example.com/ims/oneroster/v1p1/users?offset=0&limit=500",
"finalUrl": "https://oneroster-1.example.com/ims/oneroster/v1p1/users?offset=0&limit=500",
"statusCode": 200,
"recordOffset": 500,
"pageOffset": 1,
"data": {"users": [...]}
}

And now you can use a Next Page URL selector such as:

replace(finalUrl,'offset=\d*',concat('offset=',recordOffset))

to generate the next page URL: https://oneroster-1.example.com/ims/oneroster/v1p1/users?offset=500&limit=500
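The effect of that selector can be mirrored in plain JavaScript with a regular-expression replace (Studio evaluates the JMESPath expression itself; this only demonstrates the string manipulation involved):

```javascript
// Mirror of the Next Page URL selector
//   replace(finalUrl,'offset=\d*',concat('offset=',recordOffset))
// expressed in plain JavaScript against the example input above.
const input = {
  finalUrl: "https://oneroster-1.example.com/ims/oneroster/v1p1/users?offset=0&limit=500",
  recordOffset: 500
};

// Replace the first "offset=<digits>" with the accumulated record offset.
const nextPageUrl = input.finalUrl.replace(
  /offset=\d*/,
  "offset=" + input.recordOffset
);
// nextPageUrl === "https://oneroster-1.example.com/ims/oneroster/v1p1/users?offset=500&limit=500"
```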

The JMESPath specification defines some built-in core functions but also allows implementations to support additional functions. The implementation used by Studio supports the following additional functions:

• concat(string, string[, ...])

• lower_case(string)

• matches(string, regex)

• normalize_space(string)

• substring_after(string, beforeString)

• substring_before(string, afterString)

• tokenize(string[, delimiterRegex])

• translate(string, fromChars, toChars)

• upper_case(string)
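To clarify what these extension functions do, here are rough plain-JavaScript equivalents. The exact edge-case semantics in Studio's JMESPath implementation are not documented here, so treat these as approximations:

```javascript
// Approximate plain-JavaScript equivalents of several extension functions.
// Edge-case behavior (e.g., when a separator is not found) is assumed to
// follow XPath-style conventions and may differ from Studio's implementation.
const concat = (...parts) => parts.join("");
const lower_case = s => s.toLowerCase();
const upper_case = s => s.toUpperCase();
const normalize_space = s => s.trim().replace(/\s+/g, " ");
const tokenize = (s, delimiterRegex) => s.split(new RegExp(delimiterRegex || "\\s+"));
const substring_before = (s, sep) => {
  const i = s.indexOf(sep);
  return i < 0 ? "" : s.slice(0, i);
};
const substring_after = (s, sep) => {
  const i = s.indexOf(sep);
  return i < 0 ? "" : s.slice(i + sep.length);
};
```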

###### Record Definitions

Record Definitions allow Studio Administrators to define and configure the records that will be expected from the Data Provider. Each record can be customized if needed by clicking the Details button in the rightmost column.

The Details page provides more granular control over how the data will be organized, obtained, and displayed, and will vary depending on Connection Type. However, there are three common sections among the four types of applications, and their functions are similar.

General Settings

The General Settings section simply enables or disables the record definition, displays its name, and provides modifiable description text.

<Connection Type>

This section will vary by Connection Type, but the intended purpose of these fields is to configure how the connection will handle the data transfer. For a Web Service Client, this means configuring the method, relative URL additions, query parameters, HTTP headers if needed, and the JMESPath Records Selector for that record. For a Delimited Text File, the most important things to define are the file name, whether the file has headers, and the names of those headers.

### Note

You can define the charSet on the connection object; it defaults to UTF-8. The value must be a character set name accepted by Java's Charset.forName. The eolCharacter can also be defined on the connection and defaults to \n (as on Linux). RapidIdentity Studio does not currently support CSV files that begin with a byte order mark (BOM).

###### Field Definitions

The Field Definitions section allows further control over how the fields in the records are handled within the record. Existing field definitions can be deleted, and new ones can be added. Clicking the Details button in the rightmost column opens a page with configuration options such as enabled status, field attributes, encryption, and field validation.

### Note

This button displays upon selecting the line item or hovering your cursor over that row's rightmost column.

General Settings

This section defines whether the field definition is enabled, its name, and visible description.

Web Service Client or Delimited Text File

Depending on the type of application (a .csv file for Delimited Text, or a REST API for a Web Service Client), this section contains either a Field Value Selector input (Web Service Client) or a Column Name input (Delimited Text File).

For Field Value Selector input, enter the JMESPath expression that selects the field value(s) from the record JSON or XML.

For Column Name input (this is a required field), define the column name that will be used for this definition.

### Note

Column Names cannot contain special characters, including any characters defined as delimited field separators or escape characters.

The Column Name is a required field and must remain consistent throughout the life of the data in order for data transfers to be successful.

Field Attributes

The Field Attributes options provide an opportunity to define how the field will be handled by the system.

Table 28. Field Attributes Inputs

Field Name

Description

Format

Choose the format the field should be treated as. The choices are:

• Text

• Number

• Boolean

Reference To

Allows users to specify that this field definition maps to another field definition for another record definition. A field with this property filled correctly is called a reference, but can informally be considered a pointer to records of another record definition.

### Example

If you configure a record definition A with field definitions x and y, then you can create another record definition B with field definition z that is a reference to x. To do this, you would fill the Reference To attribute with the value A.x.

Now we can consider the field z to constitute a mapping from elements of B to elements of A. Mappings of this sort are useful for storing relationships between otherwise disparate record definitions. Use this to store a variety of different kinds of relationships, e.g., student to parent, student to school, section to course, etc.

Since z is a reference to A, it can be used in a Record Mapping to obtain other fields from A if needed. Usually, z would store some sort of identifier for A (such as recordKey or sourcedId). In practice, this association needs to be leveraged to obtain other fields within A. For example, a FIELD mapping with the value z.y to obtain the field y from A.

Field mapping references can include dereference operations of arbitrary depth, so this can become significantly more complex. Reference chains are also of arbitrary depth so long as each reference layer is correctly configured. For example, reference chain section.course.org.manager.employeeId for enrollment could provide an enrollment's class section, the course associated with the enrollment section, the org for the course, the manager of the org, and the employeeId of the manager.

Required

Whether this field will be required

Should Be Encrypted

Click this checkbox to apply encryption to this field

### Note

Encryption must be consistently applied across the full life of the relevant data, or data transfer will not be successful. After a change in Encryption is made, the associated job must be re-run in order for the encryption to take effect.

RapidIdentity Studio will not perform decryption of ciphertext between namespaces for field mappings of value type FIELD.

Case-Sensitive

Click this checkbox to enforce case sensitivity for this field

Single-Valued

Keep this checkbox activated to enforce single values for this field

Field Validation

These settings allow for TRACE-level validation warnings in the job logs.

### Note

If data in a field fails validation during processing, any associated TRACE logs will capture the failure, but the data will pass through to the next phase of the process. This is currently not strictly enforced, but may be in future versions.

Table 29. Field Validation Inputs

Field Name

Description

Validation Pattern

Use this field to enter a regular expression validation pattern. This can be used to further narrow the scope for what is allowed to get through in this field

Minimum Length

The minimum number of characters required in this field

Maximum Length

The maximum number of characters allowed in this field
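A hypothetical sketch of the checks these three inputs imply (the function name, parameter shape, and sample values below are invented for illustration; Studio's actual validation logic is internal):

```javascript
// Hypothetical illustration of how a validation pattern, minimum length,
// and maximum length might be checked against a field value.
function validateField(value, { pattern, minLength, maxLength }) {
  if (pattern && !new RegExp(pattern).test(value)) return false;
  if (minLength != null && value.length < minLength) return false;
  if (maxLength != null && value.length > maxLength) return false;
  return true;
}

// e.g., a nine-digit student id:
const ok = validateField("123456789", {
  pattern: "^\\d{9}$",
  minLength: 9,
  maxLength: 9
});
// ok === true
```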

### Note

Remember to Save once all inputs have been populated.

###### Studio Value Expressions

In Studio > Applications > Application > Configure > Record Mappings > Record Type Mapping > Details > Field Mappings > Field Mapping > Details, when VALUE TYPE is set to EXPRESSION, VALUE should be one of:

• An ECMAScript expression

• An ECMAScript function definition

The expression needs to evaluate to (or if a function definition, needs to return) one of the following types:

• A (possibly empty) array of strings

• a string

• null

• undefined

Any other return type will be coerced to either a string or an array of strings.

The ECMAScript interpreter used is Mozilla Rhino (currently version 1.7.12), which implements ECMAScript 3 as well as some of the more important features of ECMAScript 2015 (aka ECMAScript 6) and beyond, including support for:

• basic let and const

• arrow functions

• JSON parse/stringify

• Array functional methods

Unlike Connect, Studio does not allow direct scripting access to Java classes, so you are limited to core JavaScript functionality, plus what is provided by two objects that are injected as in-scope variables into the evaluation context of the expression.

SRC and DEST Objects

The SRC object allows access to fields of the primary source record, as well as provides methods that allow you to access secondary source records. The DEST object allows similar access to the fields of the target record.

### Note

While either the source or target object may not actually exist (e.g., when we are about to create the target object or when the source object has been deleted), SRC and DEST will always be available but may not have any field values available.

Specifically, SRC will not have a source record available for a field mapping that is only ON_DELETE and the same will be true for DEST for a field mapping that is only ON_CREATE. However, in both cases, you will have access to other functions, such as listing records in the relevant namespace.

The SRC and DEST objects provide the following methods:

• get(fieldName:string)

• gets the values for the given field from the source/target record

• will always return an array of 0 or more strings, even if the requested field does not exist

• fields can also be accessed as properties (e.g., SRC.fieldName or SRC["fieldName"]) when the name of the field is known in advance and does not conflict with any other properties or method names (e.g., toString, get, getRecord, listRecords)

• getRecord(id:string)

• gets the source/target record with the given id

• getRecord(typeName:string, recordKey:string)

• gets the source/target record of the given type and recordKey

• getRecords(ids:string[])

• gets the source/target records with the given ids

• getRecords(type:string, field:string, value:string)

• gets source/target records of the given type where the given field equals the given value

• listRecords(typeName:string[, filter:string[, maxResults:number[, orderBy:string[, firstResult:number]]]])

### Note

This will use the same syntax as all the other filters in Studio.

• gets source/target records of the given type and optional search criteria

Records returned by getRecord(), getRecords(), and listRecords() support the same interface for accessing fields and other records as do SRC and DEST.

### Note

Methods that access records other than SRC and DEST should be used sparingly, since they do not scale particularly well when the record mapping is applied to millions of records.

LOGGER Variable

The LOGGER variable should always be available in record mapping expressions, and provides access to the following methods:

• LOGGER.trace(message:object)

• Emit a log at the TRACE level

• LOGGER.debug(message:object)

• Emit a log at the DEBUG level

• LOGGER.info(message:object)

• Emit a log at the INFO level

• LOGGER.warn(message:object)

• Emit a log at the WARN level

• LOGGER.error(message:object)

• Emit a log at the ERROR level

These logs are subject to the logging level configured on the Studio job.

The methods above accept arbitrary objects, not just strings: a list, an exception, etc.
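A runnable sketch of a value expression written as a function definition. Inside Studio, SRC and LOGGER are injected into scope; the stubs below stand in for them so the sketch can run on its own, and the field names givenName and sn are hypothetical:

```javascript
// Stubs standing in for the objects Studio injects into the evaluation
// context. In Studio these would NOT be defined by the expression itself.
const SRC = {
  get: f => ({ givenName: ["Ada"], sn: ["Lovelace"] }[f] || [])
};
const LOGGER = {
  trace: m => {}, debug: m => {}, info: m => {},
  warn: m => {}, error: m => {}
};

// The value expression: a function definition that returns a string or null,
// both allowed return types for an EXPRESSION field mapping.
const expression = function () {
  const given = SRC.get("givenName");   // always an array of 0+ strings
  const surname = SRC.get("sn");
  if (given.length === 0 || surname.length === 0) {
    LOGGER.warn("missing name parts");  // logged per the job's logging level
    return null;
  }
  return given[0] + " " + surname[0];
};

console.log(expression()); // Ada Lovelace
```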

###### Studio OneRoster Manifest

The Studio OneRoster Manifest is a .csv file that is exported alongside the other delimited text files to be pushed out, and represents the information behind delimited text applications.

The manifest consists of the resource types and populated fields whose names begin with file, which tell the system what information is coming in for a Studio CSV Consumer. For each included field, the options are as follows:

• Bulk - Creates new data within the system

• Delta - Notes any changes in the data that is already in the system

### Note

RapidIdentity Studio cannot yet detect Delta changes, so it's important that the user populate those values appropriately.

• Absent - There is no data available for this field

The individual fields are defined by the imported .csv and presented by RapidIdentity with the options above, but there are other options for this menu item that are static.

Table 30. OneRoster Table Options

Field

Description

Is OneRoster

Toggle this to enforce the OneRoster setting in RapidIdentity.

### Note

When toggled off, data does not persist and will need to be reviewed when toggled back on. A .csv file would need to be configured in Record Definitions for this feature to be re-enabled.

OneRoster Version

The OneRoster version defaults to 1.1, but can be altered as needed by the user.

### Note

Users are responsible for keeping track of the version being pushed.

Manifest Version

The Manifest version defaults to 1.0, but can be altered as needed by the user.

### Note

Users are responsible for keeping track of the version being pushed.

Individual file Fields

These are the fields brought in from the .csv file configured in Record Definitions. These resource types require the data statuses described above and include:

• academicSessions

• categories

• classes

• classResources

• demographics

• enrollments

• lineItems

• orgs

• resources

• results

### Note

Before running a job for this consumer, verify that the fields listed are the same as they are within the .csv file.

Source Fields

Source System Name and Source System Code can remain blank if there is no data in the manifest for these fields.