Documentation and Knowledge Base
Functional Overview
Private AI for Confluence enhances content creation and collaboration within your Confluence environment by integrating ANY cloud or private AI model. This app is designed to integrate seamlessly with your internal systems, ensuring maximum privacy for your data.
Key features include:
Private AI for Confluence allows you to integrate an internally hosted AI model, which means all processing occurs within your internal network, ensuring data privacy and security.
Private AI for Confluence integrates with an AI model through a Groovy script. This enables integration with ANY internal or external model. Using a private model ensures no data is sent to the Internet. This can be visualized as follows:
The integration process is triggered whenever the user interacts with any of the AI Assistant features.
Requirements
The following conditions must be met in order to use Private AI for Confluence:
Private AI for Confluence is installed as a standard application through the Universal Plugin Manager (UPM).
Follow these steps:
Private AI for Confluence requires that you configure the app before you can use it.
A separate configuration page contains all configuration options. To open it, go to the Manage Apps screen, expand Private AI for Confluence, and select the Configure button.
The following configuration settings are available:
To use the app you will need to configure a simple Groovy integration script that defines the communication protocol with the AI engine. Without this script, the app will not function properly. A sample script is provided by default and serves as an example; it can be modified or adapted.
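As a minimal illustration of the script contract (assuming, as the bundled sample suggests, that the app binds the user's input to a `query` variable and expects the script to return a String), the simplest possible script could look like this:

```groovy
/* Minimal illustrative script (not the bundled sample):
   echoes the user's input back without calling any AI model.
   Assumes the app provides a `query` binding and expects a String return. */
return "Echo: " + query
```

A script this simple is only useful for verifying that the configuration page accepts and invokes your script before wiring in a real model.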
The script is invoked every time a user interacts with the app.
Script invocation lifecycle is as follows:
For proper execution the following best practices must be followed:
The following variables are available for use within the script:
/* MODIFY THIS SCRIPT TO INTEGRATE YOUR AI MODEL. THIS IS JUST AN EXAMPLE. */
import com.atlassian.json.jsonorg.*
import org.apache.http.entity.StringEntity

def messagesArray = new JSONArray()
messagesArray.put(new JSONObject().put("role", "user").put("content", query))

// Prepare body as per API specification
def payload = new JSONObject()
        .put("model", "gpt-4o")
        .put("messages", messagesArray)
        .put("temperature", 1.0)
        .toString()

// Create a new request to the specified URL
def post = httpClient.newPost("https://api.openai.com/v1/chat/completions")
post.setEntity(new StringEntity(payload, java.nio.charset.StandardCharsets.UTF_8)) // Set HTTP payload
post.addHeader("Content-Type", "application/json")   // Set HTTP header
post.addHeader("Authorization", "Bearer REPLACE_ME") // Set HTTP header

// Parse response as JSON and traverse the JSON tree to the generated text
JSONObject json = new JSONObject(httpClient.sendRequest(post))
def result = json.getJSONArray("choices").getJSONObject(0)
        .getJSONObject("message").getString("content")

return result // This script must always return the produced result
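The sample above targets a public cloud API. A privately hosted model keeps the same structure, with only the endpoint and model name changed. The sketch below is a hypothetical adaptation for an internally hosted server exposing an OpenAI-compatible chat endpoint; the URL and model name are placeholders for your environment, not values defined by the app:

```groovy
/* Hypothetical adaptation for a privately hosted, OpenAI-compatible model.
   The URL and model name are placeholders -- adjust for your deployment. */
import com.atlassian.json.jsonorg.*
import org.apache.http.entity.StringEntity

def messagesArray = new JSONArray()
messagesArray.put(new JSONObject().put("role", "user").put("content", query))

def payload = new JSONObject()
        .put("model", "local-model") // placeholder model name
        .put("messages", messagesArray)
        .toString()

// The request stays inside the internal network -- no data is sent to the Internet
def post = httpClient.newPost("http://ai.internal.example:8080/v1/chat/completions")
post.setEntity(new StringEntity(payload, java.nio.charset.StandardCharsets.UTF_8))
post.addHeader("Content-Type", "application/json")
// No Authorization header is needed if the internal server does not require one

def json = new JSONObject(httpClient.sendRequest(post))
return json.getJSONArray("choices").getJSONObject(0)
        .getJSONObject("message").getString("content")
```

Many self-hosted inference servers (for example llama.cpp or vLLM) expose this OpenAI-compatible response shape, so the JSON traversal can remain unchanged.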
Private AI for Confluence has the following limitations:
To enable logging for Private AI for Confluence, go to Confluence Administration -> Logging and Profiling and click Add New Entry. Add the following entry: com.deview_studios.atlassian.confluence.private_ai and set the logging level to DEBUG.
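Levels set on the Logging and Profiling screen do not persist across a Confluence restart. To make the DEBUG level permanent, the same entry can be added to Confluence's log4j configuration file (a sketch, assuming a Confluence Server/Data Center install that still uses log4j.properties; newer versions use a log4j2 XML file instead):

```properties
# <confluence-install>/confluence/WEB-INF/classes/log4j.properties
log4j.logger.com.deview_studios.atlassian.confluence.private_ai=DEBUG
```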
Private AI for Confluence has the following known issues: