Atlassian Jira

For a general introduction to the connector, please refer to https://www.rheininsights.com/en/connectors/jira.php.

Jira Configuration

Crawl User

The connector needs a crawl user which has the following permissions:

  1. Read access to all projects and issues that should be indexed

  2. Permission to access all project and issue permissions, as well as security permissions

  3. Read access to all users and groups

Please note that the connector uses basic authentication for the crawl user.
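
The following minimal sketch illustrates how such a basic auth login can be verified against the Jira REST API before the first crawl. It is not part of the connector; the base URL, user name and password are placeholders, and the /rest/api/2/myself resource simply returns the profile of the authenticated user.

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Minimal sketch: checks that the crawl user can authenticate against the
    // Jira REST API with basic authentication. URL and credentials are placeholders.
    public class CrawlUserCheck {
        public static void main(String[] args) throws IOException {
            String baseUrl = "https://jira.example.com"; // assumption: your Jira base URL
            String user = "crawl-user";                  // assumption: crawl user name
            String password = "secret";                  // assumption: crawl user password

            String token = Base64.getEncoder()
                    .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));

            // /rest/api/2/myself returns the authenticated user's profile.
            HttpURLConnection conn = (HttpURLConnection)
                    new URL(baseUrl + "/rest/api/2/myself").openConnection();
            conn.setRequestProperty("Authorization", "Basic " + token);

            int status = conn.getResponseCode();
            if (status == 200) {
                System.out.println("Basic authentication succeeded for user " + user);
            } else {
                System.out.println("Basic authentication failed, HTTP status " + status);
            }
        }
    }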

Password Policy

The crawl user should not be subject to password rotation. If the password does change, it must also be updated in the connector configuration.

Content Source Configuration

The content source configuration of the connector comprises the following configuration fields. The first four fields are mandatory.

  1. URL: the hostname, including protocol and port (if applicable)

  2. Username: the user name which the connector uses to crawl the instance. Please see the section above for the necessary user permissions.

  3. Password: the corresponding password of the crawl user

  4. Public keys for SSL certificates: this configuration is needed if you run the environment with self-signed certificates or certificates that are not known to the Java key store.

    The connector uses a straightforward approach to validate SSL certificates: to mark a certificate as trusted, add the modulus of its public key to this text field. You can obtain the modulus by viewing the certificate in your browser. A sketch for reading the modulus programmatically can be found after this list.

  5. Excluded files from crawling: here you can add file extensions to filter attachments that should not be sent to the search engine.

  6. Excluded projects from crawling: here you can add project names or keys to exclude them from crawling.
    Please note that a change to this list will cause an incremental crawl to remove all issues and attachments of excluded projects.

  7. Included projects for crawling: this is an include list. If it is empty, all projects except the excluded ones are indexed. As soon as the list contains at least one entry (even an empty one), only the listed projects are included for crawling. Please note that a change to this list will cause an incremental crawl to remove all issues and attachments of projects which are no longer included.

  8. The general settings are described at General Crawl Settings; you can leave these at their default values.
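
As referenced in the SSL certificate field above, the following sketch shows one way to read the modulus of a server certificate's public key programmatically instead of via the browser. It is an illustration only and assumes an RSA certificate; host and port are placeholders, and the hexadecimal output may need to be adapted to match the format shown by your browser's certificate viewer.

    import java.security.cert.X509Certificate;
    import java.security.interfaces.RSAPublicKey;
    import javax.net.ssl.*;

    // Minimal sketch: prints the modulus of a server's RSA public key so that it
    // can be pasted into the "Public keys for SSL certificates" field. The
    // trust-all manager is used only to read the certificate of a self-signed endpoint.
    public class CertificateModulus {
        public static void main(String[] args) throws Exception {
            String host = "jira.example.com"; // assumption: your Jira host
            int port = 443;

            TrustManager[] trustAll = new TrustManager[] { new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            }};

            SSLContext context = SSLContext.getInstance("TLS");
            context.init(null, trustAll, new java.security.SecureRandom());

            try (SSLSocket socket = (SSLSocket) context.getSocketFactory().createSocket(host, port)) {
                socket.startHandshake();
                X509Certificate cert = (X509Certificate) socket.getSession().getPeerCertificates()[0];
                RSAPublicKey key = (RSAPublicKey) cert.getPublicKey(); // assumes an RSA certificate
                // Hexadecimal modulus, as also shown in browser certificate viewers.
                System.out.println(key.getModulus().toString(16));
            }
        }
    }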

After entering the configuration parameters, click Validate. This validates the content crawl configuration directly against the content source. If there are issues when connecting, the validator indicates these on the page. Otherwise, you can save the configuration and continue with the Content Transformation configuration.

Limitations for Incremental Crawls and Recommended Crawl Schedules

Atlassian Jira does not offer a complete change log. Incremental crawls can therefore detect new and changed projects, issues and attachments, but they do not detect removed projects, issues or attachments, nor significant changes to project permission schemes.

Therefore, we recommend configuring incremental crawls to run every 15-30 minutes, full scan principal crawls to run twice a day, and a weekly full scan of the documents of the Jira instance. For more information see Crawl Scheduling.
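
As an illustration only, this recommendation could be expressed with standard cron expressions such as the following. The actual schedules are configured as described in the Crawl Scheduling documentation; the constants below are merely one possible translation of "every 30 minutes", "twice a day" and "once a week".

    // Illustration only: recommended crawl frequencies as standard Unix cron expressions.
    public class RecommendedSchedules {
        // Incremental content crawl: every 30 minutes.
        static final String INCREMENTAL_CRAWL = "*/30 * * * *";
        // Full scan principal crawl: twice a day, e.g. 06:00 and 18:00.
        static final String PRINCIPAL_FULL_SCAN = "0 6,18 * * *";
        // Full scan of all documents: once a week, e.g. Sunday 03:00.
        static final String DOCUMENT_FULL_SCAN = "0 3 * * 0";
    }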

Please furthermore note that, due to API limitations of Jira Server, the connector does not support issue-level security.

Furthermore, it is a design decision that an issue is indexed as a single document whose body consists of the summary and all comments. Comments can have their own viewing permissions, but the connector does not split comments into separate documents with separate ACLs.
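
The following sketch merely illustrates this design decision; the Issue type and its fields are placeholders and not part of the connector's API. The indexed body is the issue summary followed by all comment bodies, so comment-level viewing permissions are not reflected in the resulting document.

    import java.util.List;

    // Minimal sketch of the described design decision: one document per issue,
    // with the summary and all comment bodies concatenated into a single body.
    public class IssueBodyComposer {
        record Issue(String summary, List<String> comments) {} // placeholder type

        static String composeBody(Issue issue) {
            StringBuilder body = new StringBuilder(issue.summary());
            for (String comment : issue.comments()) {
                body.append("\n\n").append(comment);
            }
            return body.toString();
        }

        public static void main(String[] args) {
            Issue issue = new Issue("Crawler fails on startup",
                    List.of("Reproduced on 9.4.", "Fixed by increasing the heap size."));
            System.out.println(composeBody(issue));
        }
    }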