Konveyor AI (Kai) is Konveyor's approach to easing the modernization of application source code to a new target. It leverages LLMs, guided by static code analysis and augmented with data in Konveyor about how an organization solved similar problems in the past.
In this demo, we will showcase the capabilities of Konveyor AI (Kai) in facilitating the modernization of application source code to a new target. We will illustrate how Kai can handle various levels of migration complexity, ranging from simple import swaps to more involved changes such as modifying bean scope to satisfy CDI requirements. Additionally, we will look into migration scenarios that involve EJB Remote and Message Driven Bean (MDB) changes.
We will focus on migrating a partially migrated JavaEE Coolstore application to Quarkus, a task that involves not only technical translation but also considerations for deployment to Kubernetes. By the end of this demo, you will understand how Konveyor AI (Kai) can assist and expedite the modernization process.
This demo will focus primarily on driving Kai's RPC server via helper scripts. This guide will not cover the IDE integration.
- Docker
- Git
- GenAI credentials
- Maven
- Quarkus 3.10
- Java 17
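A quick way to confirm the prerequisites are installed and on your PATH:

```sh
$ docker --version
$ git --version
$ mvn -version   # Maven also prints the Java version it uses
$ java -version  # should report a Java 17 runtime
```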
You can configure Kai in multiple ways. The best way to configure Kai is with a `config.toml` file. For this walkthrough, we will create `example/config.toml` with a very minimal configuration to get started.
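If you want to bootstrap the file now, you could create it with a placeholder `[models]` table and fill it in from whichever provider example below matches your setup:

```sh
# Creates the walkthrough's minimal config file; the [models] table is
# populated from one of the provider-specific examples that follow.
cat > example/config.toml <<'EOF'
[models]
# provider and model arguments go here; see the provider sections below
EOF
```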
When not running in demo mode, you will need to select an LLM model to use with Kai. Here are some examples of how to configure Kai to use different models. For more options, see `llm_selection.md`.
> [!IMPORTANT]
> The demo assumes you are using IBM-served Llama 3. If you are using a different model, the responses you get back may be different.
> [!WARNING]
> In order to use this service, an individual needs to obtain a w3id from IBM. The Kai development team is unable to help with obtaining this access.
- Log in to https://bam.res.ibm.com/.
- To access via an API, open the 'Documentation' section after logging in to https://bam.res.ibm.com/. You will see an embedded field there where you can generate/obtain an API key.
- Ensure you have exported the key via `export GENAI_KEY=my-secret-api-key-value`.
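Before moving on, you can have your shell fail fast if the key is missing from the current environment:

```sh
# Errors out with the given message if GENAI_KEY is unset or empty
$ : "${GENAI_KEY:?GENAI_KEY is not set; export it before starting Kai}"
```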
Next, paste the following into your `example/config.toml` file:
```toml
[models]
provider = "ChatIBMGenAI"

[models.args]
model_id = "meta-llama/llama-3-70b-instruct"
parameters.max_new_tokens = 2048
```
- Obtain your AWS API key from Amazon Bedrock.
- Export your `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_DEFAULT_REGION` environment variables, as shown below.
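For example (placeholder values shown; substitute your own credentials and preferred region):

```sh
$ export AWS_ACCESS_KEY_ID=my-access-key-id
$ export AWS_SECRET_ACCESS_KEY=my-secret-access-key
$ export AWS_DEFAULT_REGION=us-east-1
```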
Next, paste the following into your `example/config.toml` file:
```toml
[models]
provider = "ChatBedrock"

[models.args]
model_id = "meta.llama3-70b-instruct-v1:0"
```
Finally, run the backend with `podman compose up`.
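If you have the AWS CLI installed, a quick sanity check will confirm that the exported credentials are valid before Kai tries to reach Bedrock:

```sh
# Prints the account and ARN associated with the exported credentials
$ aws sts get-caller-identity
```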
- Follow the directions from OpenAI here.
- Ensure you have exported the key via `export OPENAI_API_KEY=my-secret-api-key-value`.
Next, paste the following into your `example/config.toml` file:
```toml
[models]
provider = "ChatOpenAI"

[models.args]
model = "gpt-3.5-turbo"
```
By default, this example demo script will run Kai in demo mode, which uses cached responses. This is helpful if you don't have access to an LLM. It also makes the responses deterministic, rather than varying slightly from one LLM response to the next.
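How demo mode is toggled depends on the version of these scripts; the environment variable below is a hypothetical illustration, so check `run_demo.py` itself for the actual switch before relying on it:

```sh
# Hypothetical toggle; verify the real flag or variable in run_demo.py.
# Runs against a live LLM instead of the cached demo responses:
$ DEMO_MODE=false python run_demo.py
```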
Download the Kai server and analyzer binaries for your machine and place them in our example directory. First, download the zip file for your OS here. Unzip the archive and copy the binaries to `example/analysis`.
```sh
$ cp ~/Downloads/kai-rpc-server.linux-x86_64/kai-rpc-server example/analysis/
$ cp ~/Downloads/kai-rpc-server.linux-x86_64/kai-analyzer-rpc example/analysis/
```
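Depending on how the zip was extracted, the copied binaries may not be marked executable; it is safe to set the bit explicitly:

```sh
$ chmod +x example/analysis/kai-rpc-server example/analysis/kai-analyzer-rpc
```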
The simplest way to start JDTLS is to use Docker:

```sh
$ docker run -d --name=bundle quay.io/konveyor/jdtls-server-base:latest
```
Copy the JDTLS binary and other needed files out of the container:

```sh
$ docker cp bundle:/usr/local/etc/maven.default.index ./example/analysis
$ docker cp bundle:/jdtls ./example/analysis
$ docker cp bundle:/jdtls/java-analyzer-bundle/java-analyzer-bundle.core/target/java-analyzer-bundle.core-1.0.0-SNAPSHOT.jar ./example/analysis/bundle.jar
```
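The container was only needed to copy these files out of the image; once the copies are done, you can remove it:

```sh
$ docker rm -f bundle
```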
Now `example/analysis` contains all of the needed dependencies to run Kai.
Let's clone the Coolstore application, which we will use to demo the migration process to Quarkus.
Clone the Coolstore demo from its repository:
```sh
$ cd example/
$ git clone https://github.com/konveyor-ecosystem/coolstore.git
```
We will use a helper script that will perform an analysis of the Coolstore application using the following migration targets to identify potential areas for improvement:
- jakarta-ee
- jakarta-ee8
- jakarta-ee9
- quarkus
To run Kai, we can use `run_demo.py`, which will start up the Kai RPC server and run the limited agentic workflow to migrate the Coolstore application to Quarkus.
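Assuming a Python environment with Kai's requirements installed, and that `run_demo.py` sits in the example directory alongside the assets staged above, the invocation is simply:

```sh
$ cd example/
$ python run_demo.py
```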