Documentation
Microsoft Dynamics 365
For a general introduction to the connector, please refer to RheinInsights Microsoft Dynamics 365 Connector. This connector supports Microsoft Dynamics 365 Server Version 8.x and above.
Dynamics 365 Configuration
Our Microsoft Dynamics 365 Connector supports user-based authentication against Dynamics and uses the REST APIs provided by your instance.
To this end, it uses a dedicated crawl user for accessing the data. Authentication can take place via NTLM or Kerberos. We recommend that the crawl user’s password does not expire.
Permissions
The crawl user needs read permissions on the following entities.
System users
Teams
Team members
Business units
Roles
Role collections
Role privileges
Team roles
User roles
Accounts
Addresses
Annotations
Phone calls
Posts
App modules
Contacts
Contracts
Incidents
KB articles
Knowledge articles
Leads
Opportunities
Sales orders
Active Directory
Dynamics 365 on-premises identifies users by their sAMAccountName, so it may be necessary to map these to userPrincipalNames, i.e., mail addresses. For this user name mapping, a separate user leveraging Active Directory must be used. This user must have read access to the users in the global catalog of your Active Directory.
Please refer to Ldap/Active Directory Security Transformer for the corresponding configuration instructions.
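As an illustration of this mapping, a minimal Python sketch is shown below. The function name and the domain suffix table are assumptions for illustration only; in production, the lookup is resolved against the Active Directory global catalog via the security transformer.

```python
# Hypothetical sketch: map an on-premises sAMAccountName such as
# "CONTOSO\\jdoe" to a userPrincipalName (mail-style address).
# The suffix table below is a stand-in assumption; the real lookup
# queries the Active Directory global catalog.

DOMAIN_SUFFIXES = {"CONTOSO": "contoso.com"}  # assumed NetBIOS -> UPN suffix map

def to_user_principal_name(account: str) -> str:
    """Convert 'DOMAIN\\samaccountname' to 'samaccountname@suffix'."""
    domain, _, sam = account.partition("\\")
    suffix = DOMAIN_SUFFIXES[domain.upper()]
    return f"{sam.lower()}@{suffix}"

print(to_user_principal_name("CONTOSO\\JDoe"))  # jdoe@contoso.com
```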
Content Source Configuration
The content source configuration of the connector comprises the following mandatory configuration fields.

Base URL. This is the root URL of your Dynamics instance. Please add it without a trailing slash.
Authentication method. Choose either NTLM or Kerberos.
Crawl user. This is the login name of the crawl user. The user must have the permissions described above, and the user name must be provided as domain\samaccountname.
Crawl user’s password. This is the corresponding password of the crawl user.
Public keys for SSL certificates. This configuration is needed if you run the environment with self-signed certificates, or with certificates which are not known to the Java key store.
We use a straightforward approach to validate SSL certificates: to render a certificate valid, add the modulus of its public key into this text field. You can access this modulus by viewing the certificate within the browser.

Included types. This is a list of Dynamics entities which are included in a crawl. Where applicable, each entity is indirectly enriched with associated posts, notes, annotations or incident resolutions. The supported entity types are:
accounts
incidents
contacts
contracts
kbarticles
knowledgearticles
leads
opportunities
salesorders
Excluded attachments. The file suffixes in this list are used to determine which documents should not be indexed, such as images or executables.
Include post contents in crawling. If this is enabled, associated post entities are extracted and attached to the corresponding parent entities as listed in the included types.
Include resolution contents in crawling. If this is enabled, resolution notes are extracted and attached to the associated incident entities.
Include phone call and mail contents in crawling. If this is enabled, associated phone call and mail entities are extracted and attached to the corresponding parent entities as listed in the included types.
API Version. Add the API version which should be used by the connector. The connector then connects against <baseUrl>/api/data/<api version>/…
Rate limit. This determines how many HTTP requests per second are issued against Dynamics 365.
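Taken together, the base URL, API version and included types determine the request URLs the connector issues. The following sketch illustrates the URL construction under the assumptions above; the helper name and example values are illustrative, not part of the connector's configuration API.

```python
# Illustrative sketch of how the configuration fields combine into a
# Dynamics 365 Web API endpoint URL. The supported type list mirrors
# the "Included types" documented above.

SUPPORTED_TYPES = {
    "accounts", "incidents", "contacts", "contracts", "kbarticles",
    "knowledgearticles", "leads", "opportunities", "salesorders",
}

def entity_endpoint(base_url: str, api_version: str, entity_type: str) -> str:
    """Build <baseUrl>/api/data/<api version>/<entity type> for a crawl request."""
    if entity_type not in SUPPORTED_TYPES:
        raise ValueError(f"unsupported entity type: {entity_type}")
    base = base_url.rstrip("/")  # the base URL must not carry a trailing slash
    return f"{base}/api/data/{api_version}/{entity_type}"

print(entity_endpoint("https://crm.example.com/", "v9.1", "accounts"))
# https://crm.example.com/api/data/v9.1/accounts
```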
The general settings are described at General Crawl Settings; you can leave these at their default values.
After entering the configuration parameters, click on Validate. This validates the content crawl configuration directly against the content source. If there are issues when connecting, the validator will indicate these on the page. Otherwise, you can save the configuration and continue with the Content Transformation configuration.
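The rate limit described above caps the number of HTTP requests per second. A client-side cap of this kind can be enforced by spacing requests at a fixed interval; the sketch below is an illustration of the idea, not the connector's actual implementation.

```python
import time

class Throttle:
    """Allow at most `rate` acquisitions per second by spacing calls."""

    def __init__(self, rate: float):
        self.interval = 1.0 / rate  # minimum spacing between requests
        self.next_slot = 0.0

    def acquire(self) -> None:
        """Block until the next request slot is available."""
        now = time.monotonic()
        if now < self.next_slot:
            time.sleep(self.next_slot - now)
            now = self.next_slot
        self.next_slot = now + self.interval

throttle = Throttle(rate=50)   # e.g. a configured limit of 50 requests/second
start = time.monotonic()
for _ in range(5):
    throttle.acquire()         # call once before each HTTP request
elapsed = time.monotonic() - start
print(f"5 requests took at least {elapsed:.3f}s")
```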
Recommended Crawl Schedules
Content Crawls
The connector supports incremental crawls. These rely on the sorting capabilities of the Dynamics APIs, which are very limited: incremental crawls will generally detect new entities, but often not changed ones. Deletions are not detected in this mode at all. Incremental crawls should run every 15 minutes.
Due to the limitations of incremental crawls, we recommend running a Full Scan Crawl every few hours or daily.
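The limitation stems from ordering by modification time being the main change signal available. As an illustration, an incremental request could be shaped as an OData query like the one sketched below; `$filter`, `$orderby` and the `modifiedon` attribute are standard Dynamics Web API features, but the helper function itself is an assumption, not the connector's internal logic.

```python
# Illustrative sketch of an incremental OData query against the
# Dynamics 365 Web API: fetch entities modified since the last crawl,
# newest first. Deleted entities never appear in such a result set,
# which is why a periodic full scan remains necessary.
from urllib.parse import urlencode, quote

def incremental_query(base_url: str, api_version: str, entity_type: str,
                      since_iso: str) -> str:
    """Build <baseUrl>/api/data/<api version>/<entity type>?$filter=...&$orderby=..."""
    options = {
        "$filter": f"modifiedon gt {since_iso}",
        "$orderby": "modifiedon desc",
    }
    base = base_url.rstrip("/")
    query = urlencode(options, safe="$:", quote_via=quote)
    return f"{base}/api/data/{api_version}/{entity_type}?{query}"

print(incremental_query("https://crm.example.com", "v9.1",
                        "incidents", "2024-01-01T00:00:00Z"))
```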
For more information see Crawl Scheduling.
Principal Crawls
Depending on your requirements, we recommend running a Full Principal Scan daily or less often.
For more information see Crawl Scheduling.