Documentation and Knowledge Base
Functional Overview
AI Issue Breakdown Assistant for Jira introduces automated issue generation functionality. It exposes a new Generate Issues button in the Issue View screen. Once the button is pressed:
The AI engine used to generate issues is configurable. The app operates in one of two modes:
When Cloud AI Mode is enabled, AI Issue Breakdown Assistant integrates Jira with an external AI provider for issue generation, routed through a Heroku server. This can be visualized as follows:
The Deview Studios app server is hosted on the Heroku platform and is the entry point for the integration.
AI Issue Breakdown Assistant also supports Private AI Mode. If enabled, the Heroku cloud server is not used and no data is sent to the Internet; instead, the app expects a Private AI model hosted within the same datacenter:
The integration process is triggered whenever the Generate Issues button is pressed in the Issue View screen and the form is submitted.
The integration process can be visualized as follows. Issue generation differs depending on whether Cloud AI mode or Private AI mode is enabled.
If Private AI mode is enabled, the Heroku cloud server is not used and no data is sent to the Internet.
The following conditions must be met in order to use AI Issue Breakdown Assistant for Jira:
AI Issue Breakdown Assistant for Jira is installed as a standard application through the Universal Plugin Manager (UPM).
Follow these steps:
AI Issue Breakdown Assistant for Jira Data Center requires that you select the app Integration mode before you can use it.
A separate configuration page is available with all configuration options. To navigate to it, go to the Manage Apps screen, open AI Issue Breakdown Assistant for Jira and select the Configure button.
The following configuration settings are available:
With Private AI mode enabled, you can bring your own AI model as the issue generation engine used by the application. You will need to configure a simple Groovy integration script that defines the communication protocol with the AI engine. Without the script configured, the app will not function properly and will not be able to generate issues. By default, a sample script is available that serves as an example; it can be modified or adapted.
The script will be invoked every time the Generate Issues form is submitted.
The script invocation lifecycle is as follows:
For proper execution, the following best practices must be followed:
The following variables are available for use within the script:
/* MODIFY THIS SCRIPT TO INTEGRATE YOUR AI MODEL. THIS IS JUST AN EXAMPLE. */
import com.atlassian.jira.util.json.*

def payload = new JSONObject() // Prepare body as per API specification
        .put("model", "gpt-4o")
        .put("messages", new JSONArray()
                .put(new JSONObject()
                        .put("role", "user")
                        .put("content", query)))
        .put("temperature", 1.0)
        .toString()

def post = new URL("https://api.openai.com/v1/chat/completions").openConnection() // Create a new request to the specified URL
logger.debug("Preparing POST body: " + payload)
post.setConnectTimeout(60000) // Set a timeout of 60 seconds - make sure to always set timeouts
post.setReadTimeout(60000) // Set a timeout of 60 seconds - make sure to always set timeouts
post.setRequestMethod("POST") // Specify HTTP method
post.setDoOutput(true) // Specify that our HTTP request has a body
post.setRequestProperty("Content-Type", "application/json") // Set HTTP header
post.setRequestProperty("Authorization", "Bearer REPLACE_ME") // Set HTTP header
post.getOutputStream().write(payload.getBytes("UTF-8")) // Set HTTP body
def postRC = post.getResponseCode() // Send POST request
def response = (postRC < 400) ? post.getInputStream().getText() : post.getErrorStream()?.getText() // Read response body (getInputStream() throws for HTTP error codes, so read the error stream instead)
logger.debug("Response code is: " + postRC + "; Response body is: " + response)
if (postRC.equals(200)) { // Verify request is successful
    def json = new JSONObject(response) // Read as JSON
    def result = json.getJSONArray("choices").getJSONObject(0).getJSONObject("message").get("content") // Traverse JSON tree
    return result // This script must always return the produced issues
}
throw new Exception("Unsuccessful request! HTTP response code is " + postRC + "; Response body is: " + response) // Throw error due to unsuccessful request
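For Private AI mode, the script follows the same lifecycle but points at a model hosted inside your own network instead of an external provider. The sketch below is a minimal, hypothetical adaptation of the sample above: the endpoint URL, model name, payload fields and response structure are assumptions and must be adjusted to match the API of the model you actually host.

/* MINIMAL SKETCH FOR PRIVATE AI MODE - THE ENDPOINT, MODEL NAME, PAYLOAD AND RESPONSE FORMAT ARE ASSUMPTIONS. */
import com.atlassian.jira.util.json.*

def payload = new JSONObject()
        .put("model", "my-local-model") // Assumed model name - replace with your own
        .put("messages", new JSONArray()
                .put(new JSONObject()
                        .put("role", "user")
                        .put("content", query))) // "query" is provided by the app, as in the sample above
        .put("stream", false)
        .toString()

def post = new URL("http://ai.internal:11434/api/chat").openConnection() // Assumed internal URL - no traffic leaves the datacenter
post.setConnectTimeout(60000) // Always set timeouts
post.setReadTimeout(60000)
post.setRequestMethod("POST")
post.setDoOutput(true)
post.setRequestProperty("Content-Type", "application/json")
post.getOutputStream().write(payload.getBytes("UTF-8"))

def postRC = post.getResponseCode()
if (postRC.equals(200)) {
    def response = post.getInputStream().getText()
    logger.debug("Private AI response: " + response)
    // Assumed response shape: { "message": { "content": "..." } } - adjust the parsing to your model's API
    return new JSONObject(response).getJSONObject("message").get("content") // The script must return the produced issues
}
throw new Exception("Private AI request failed with HTTP " + postRC + ": " + post.getErrorStream()?.getText())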
AI Issue Breakdown Assistant for Jira has the following limitations:
To enable logging for AI Issue Breakdown Assistant, go to Jira Administration -> System -> Logging and Profiling and click Configure under Default loggers. Add the following entry: com.deview_studios.atlassian.jira.dc.ai_issue_generator and set the logging level to DEBUG.
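With DEBUG enabled for this logger, entries written by the app, including any logger.debug(...) calls made inside the Groovy integration script, typically appear in the atlassian-jira.log file. As a hypothetical troubleshooting aid, you can add extra diagnostic statements to the script, for example:

// Optional diagnostic logging you could add to your integration script while troubleshooting.
// "logger" and "query" are the variables exposed to the script (see the sample above); this assumes query is a String.
logger.debug("Generate Issues form submitted; query length: " + query.length())

def start = System.currentTimeMillis()
// ... perform the HTTP call to your AI engine here ...
logger.debug("AI engine call completed in " + (System.currentTimeMillis() - start) + " ms")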
AI Issue Breakdown Assistant for Jira has the following known issues: