AI Issue Breakdown Assistant for Jira

Documentation and Knowledge Base

Functional Overview

AI Issue Breakdown Assistant for Jira introduces automated issue generation functionality. It adds a new Generate Issues button to the Issue View screen. Once the button is pressed:

  • The user is prompted to select a Template and Issue Type. Template specifies the strategy for issue generation. Valid options are User Stories and Test Cases. Issue Type specifies the type of the newly generated issues – this dropdown is automatically populated with the available issue types in the current project.

  • Once the form is submitted, the issue summary and description are sent to an external AI engine for analysis. The AI engine breaks down the issue and generates child user stories or test cases – the results are displayed to the user, who can edit them before proceeding.

  • Once confirmed, the new issues are created and linked to the parent issue. If the parent is an Epic, the new issues are created as children of that Epic.

The AI engine used to generate issues is configurable. The app operates in one of two modes:

  • Cloud AI – utilizes the OpenAI GPT-4o model, hosted on the Internet
  • Private AI – any custom or private AI model can be integrated through a Groovy script

Architectural Overview

When Cloud AI Mode is enabled, AI Issue Breakdown Assistant integrates Jira with an external AI provider for issue generation through a Heroku server. This can be visualized as follows:

The Deview Studios app server is hosted on the Heroku platform and is the entry point for the integration.

AI Issue Breakdown Assistant also supports Private AI mode. If enabled, the Heroku cloud server is not used and no data is sent to the Internet; instead, a Private AI model is expected to be hosted within the same datacenter:

The integration process is triggered whenever the Generate Issues button is pressed in the Issue View screen and the form is submitted.

The integration process can be visualized as follows. Issue generation differs depending on whether Cloud AI mode or Private AI mode is enabled.

If Private AI mode is enabled, the Heroku cloud server is not used and no data is sent to the Internet.

Requirements

The following conditions must be met in order to use AI Issue Breakdown Assistant for Jira:

  • For Cloud AI mode – connectivity to deview-dc-prod-30130e64aa60.herokuapp.com via HTTPS (TCP port 443); an HTTP proxy is supported (see the connectivity check sketch after this list)
  • For Private AI mode – an internally hosted Private AI server must be available and reachable from Jira
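To verify Cloud AI connectivity from a Jira node, a quick check along the following lines can be used. This is a minimal sketch and not part of the app; it assumes you can run a Groovy snippet on the node (for example with a standalone Groovy installation or a script console app) and it does not account for an HTTP proxy:

def conn = new URL("https://deview-dc-prod-30130e64aa60.herokuapp.com").openConnection() // Open a connection to the Cloud AI entry point
conn.setConnectTimeout(10000) // Fail fast if the host is unreachable
conn.setReadTimeout(10000)
conn.setRequestMethod("GET")
println("HTTP response code: " + conn.getResponseCode()) // Any HTTP response code proves connectivity on port 443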

Installation

AI Issue Breakdown Assistant for Jira is installed as a standard application through the Universal Plugin Manager (UPM).

Follow these steps:

  1. Log in as a system administrator
  2. Navigate to Jira Administration -> Manage Apps
  3. In the Search the Marketplace field, type AI Issue Breakdown Assistant for Jira
  4. The app will appear in the results panel. Click the Free Trial or Buy Now button
  5. Accept the terms and conditions
  6. Once the app is installed, click Get license and follow the instructions
  7. Choose the application integration mode: Cloud AI or Private AI. To do that, navigate to the Manage Apps screen, open AI Issue Breakdown Assistant for Jira and select the Configure button. Refer to the Configuration section on this page for more information on which mode to choose.
Note that AI Issue Breakdown Assistant for Jira uses the standard Atlassian licensing model. A trial period is available.
 

Configuration

AI Issue Breakdown Assistant for Jira Data Center requires that you select the app Integration mode before you can use it.

A separate configuration page is available with all configuration options. To navigate to it, go to the Manage Apps screen, open AI Issue Breakdown Assistant for Jira and select the Configure button.

The following configuration settings are available:

  • Integration mode – specifies how the application integrates with the selected AI provider. This setting is not selected by default and must be configured by a system administrator for the app to function. There are two options to choose from: Cloud AI mode allows you to utilize OpenAI GPT-4o and requires Internet connectivity; Private AI mode allows you to use an internally hosted AI model for maximum privacy and requires no Internet connectivity.
  • Integration script (only available in Private AI mode) – defines the communication protocol with the Private AI model (additional details are available below)
  • Proxy settings (only available in Cloud AI mode) – HTTP proxy configuration settings required for Jira Data Center instances that don’t have direct Internet connectivity
  • Issue Generation policy – enables or disables the Generate Issues button for issues. Exceptions can be applied through the Policy exceptions field.
  • Policy exceptions – applies project exceptions to the Issue Generation policy. The value must be a comma-separated list of project keys, without whitespace. If Issue Generation policy is set to Enable, all projects specified in this field will have Generate Issues disabled; if it is set to Disable, all projects specified in this field will have Generate Issues enabled. See the example after this list.
  • Language – the language in which issues should be generated. This setting instructs the AI model to respond in the selected language; however, the model is not guaranteed to honor the instruction.
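For example, with hypothetical project keys: if Issue Generation policy is set to Enable and Policy exceptions is set to HR,LEGAL, the Generate Issues button is available in every project except HR and LEGAL. With the policy set to Disable, the same exception list makes the button available only in HR and LEGAL.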
 
Private AI Mode Integration Script

With Private AI mode enabled, you can bring your own AI model as the issue generation engine used by the application. You will need to configure a simple Groovy integration script that defines the communication protocol with the AI engine. Without this script, the app will not function and will not be able to generate issues. A sample script is provided by default and serves as an example – it can be modified or adapted.

The script is invoked every time the Generate Issues form is submitted.

Script invocation lifecycle is as follows:

  1. Generate Issues form is submitted
  2. Groovy script execution is triggered
  3. Groovy script integrates with a Private AI model over HTTP or other means
  4. Groovy script returns a single String result, which contains the generated issues

For proper execution, the following best practices must be followed (a minimal skeleton illustrating them appears after this list):

  • The script must always return the text response from the AI engine as a String result.
  • A Groovy parameter named query is passed to the script. It contains the AI query and must be forwarded to the AI engine unchanged.
  • If an error occurs, an Exception should be thrown. Example: throw new Exception("An error has occurred...")
  • Always set a timeout on all blocking operations, especially HTTP calls.
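The following minimal skeleton illustrates this contract. It is a sketch only: the http://private-ai.internal/generate endpoint and its plain-text request format are hypothetical placeholders that must be replaced with the details of your own Private AI server.

/* MINIMAL SKELETON - REPLACE THE PLACEHOLDER ENDPOINT AND REQUEST FORMAT WITH THOSE OF YOUR PRIVATE AI SERVER. */
def conn = new URL("http://private-ai.internal/generate").openConnection() // Hypothetical Private AI endpoint
conn.setConnectTimeout(60000) // Always set timeouts on blocking operations
conn.setReadTimeout(60000)
conn.setRequestMethod("POST") // Specify HTTP method
conn.setDoOutput(true) // The request has a body
conn.setRequestProperty("Content-Type", "text/plain") // Adjust to whatever your server expects
conn.getOutputStream().write(query.getBytes("UTF-8")) // Forward the query variable unchanged

def rc = conn.getResponseCode() // Send the request
logger.debug("Private AI response code: " + rc)
if (rc == 200) {
    return conn.getInputStream().getText("UTF-8") // Always return the AI response as a String
}
throw new Exception("Private AI request failed with HTTP code " + rc) // Signal errors by throwing an Exception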

The following variables are available for use within the script:

  • query – query to send to the AI model
  • language – selected language
  • logger – may be used for logging. Use as: logger.info("message"), logger.debug("message") or logger.error("message"). The resulting logs will be available in the standard atlassian-jira.log files – please ensure you enable the required logging level for the com.deview_studios.atlassian.jira.dc.ai_issue_generator package from the Logging and profiling menu.
It is recommended to enable DEBUG logging while developing the script so that you can see any errors. Please see the Logging section below.
If you need help developing the script, don't hesitate to reach out to a Deview Studios representative through our standard channels. No additional charges apply.

The following is an example integration with OpenAI GPT-4o through the Groovy script:
/* MODIFY THIS SCRIPT TO INTEGRATE YOUR AI MODEL. THIS IS JUST AN EXAMPLE. */
import com.atlassian.jira.util.json.*

def payload = new JSONObject() // Prepare body as per API specification
    .put("model", "gpt-4o")
    .put("messages", new JSONArray()
        .put(new JSONObject()
            .put("role", "user")
            .put("content", query)))
    .put("temperature", 1.0)
    .toString();

def post = new URL("https://api.openai.com/v1/chat/completions").openConnection() // Create a new request to specified URL
logger.debug("Preparing POST body: " + payload)

post.setConnectTimeout(60000) // Set a timeout to 60 seconds - make sure to always set timeouts
post.setReadTimeout(60000); // Set a timeout to 60 seconds - make sure to always set timeouts
post.setRequestMethod("POST") // Specify HTTP method
post.setDoOutput(true) // Specify our HTTP request has a body
post.setRequestProperty("Content-Type", "application/json") // Set HTTP header
post.setRequestProperty("Authorization", "Bearer REPLACE_ME") // Set HTTP header
post.getOutputStream().write(payload.getBytes("UTF-8")) // Set HTTP body

def postRC = post.getResponseCode() // Send POST request
def response = (post.getErrorStream() ?: post.getInputStream()).getText() // Read response body (the error stream is used for error responses)
logger.debug("Response code is: " + postRC + "; Response body is: " + response)

if (postRC.equals(200)) { // Verify request is successful
    def json = new JSONObject(response) // Read as JSON
    def result = json.getJSONArray("choices").getJSONObject(0).getJSONObject("message").get("content") // Traverse JSON tree
    return result // This script must always return the produced issues
}

throw new Exception("Unsuccessful request! HTTP response code is " + postRC + "; Response body is: " + response) // Throw error due to unsuccessful request
 

Usage

To use AI Issue Breakdown Assistant for Jira, navigate to an existing issue and click the Generate Issues button:

  • The user is prompted to select a Template and Issue Type. Template specifies the strategy for issue generation. Valid options are User Stories and Test Cases. Issue Type specifies the type of the newly generated issues – this dropdown is automatically populated with the available issue types in the current project.

  • Once the form is submitted, the issue summary and description are sent to an external AI engine for analysis. The AI engine breaks down the issue and generates child user stories or test cases – the results are displayed to the user, who can edit them before proceeding.

  • Once confirmed, the new issues are created and linked to the parent issue. If the parent is an Epic, the new issues are created as children of that Epic.


Limitations

AI Issue Breakdown Assistant for Jira has the following limitations:

  • In Cloud AI mode, issue generation is performed with a maximum of 4096 OpenAI tokens. This limit does not apply in Private AI mode.

Logging

To enable logging for AI Issue Breakdown Assistant, go to Jira Administration -> System -> Logging and Profiling and click Configure under Default loggers. Add the entry com.deview_studios.atlassian.jira.dc.ai_issue_generator and set the logging level to DEBUG.

Known Issues

AI Issue Breakdown Assistant for Jira has the following known issues:

  • Generated issues are irrelevant or incorrect. Try adding more information to the issue description field to provide sufficient context. Remember that the AI provider has no context other than what is provided in the summary and description fields.
  • The following error message is displayed when Generate Issues is clicked: “License for this feature is not set or has expired. Please contact your system administrator”. Ensure that the app is licensed.

 

Contact Us

Copyright © 2022 Deview Studios