
Commit a5eb720

Authored by Pranab Rajbhandari
Release notes gen ai/7.0.0 (#1756)
* changes for website
* release note for 6.11.3
* changes
* changes
* changes for release note
* changes
* small style updates

Co-authored-by: Pranab Rajbhandari <pranabrajbhandari@Pranabs-MacBook-Pro.local>
Co-authored-by: diatrambitas <JSL.Git2018>
1 parent bb6794d commit a5eb720

File tree

20 files changed (+618, -211 lines)


docs/_data/navigation.yml

Lines changed: 3 additions & 3 deletions
```diff
@@ -113,7 +113,7 @@ annotation-lab:
       url: /docs/en/alab/llm_prompts
     - title: Playground
       url: /docs/en/alab/playground
-  - subtitle: Project
+  - subtitle: Project and Tasks
     children:
     - title: Dashboard
       url: /docs/en/alab/project_dashboard
@@ -129,7 +129,7 @@ annotation-lab:
       url: /docs/en/alab/tasks
     - title: Workflows
       url: /docs/en/alab/workflow
-  - subtitle: Project Configuration
+  - subtitle: Project Types
     children:
     - title: Overview
       url: /docs/en/alab/tags_overview
@@ -143,7 +143,7 @@ annotation-lab:
       url: /docs/en/alab/tags_image
     - title: PDF
       url: /docs/en/alab/tags_pdf
-  - subtitle: Annotation
+  - subtitle: Data Annotation
     children:
     - title: Manual Annotation
       url: /docs/en/alab/annotation
```

docs/_includes/docs-annotation-pagination.html

Lines changed: 1 addition & 0 deletions
```diff
@@ -10,6 +10,7 @@
   </li>
 </ul>
 <ul class="pagination owl-carousel pagination_big">
+  <li><a href="release_notes_7_0_0">7.0.0</a></li>
   <li><a href="release_notes_6_11_3">6.11.3</a></li>
   <li><a href="release_notes_6_11_2">6.11.2</a></li>
   <li><a href="release_notes_6_11_1">6.11.1</a></li>
```
Binary image assets added (11 files, 80.6 KB to 5.49 MB each).

docs/en/alab/annotation.md

Lines changed: 9 additions & 0 deletions
```diff
@@ -69,6 +69,15 @@ On the Labeling page, when selecting the <es>Prediction</es> widget, users can s
 
 </div><div class="h3-box" markdown="1">
 
+### Enable Bulk Hiding of Labels
+Users can hide multiple labels at once, significantly improving efficiency when working with large datasets.
+
+Previously, labels had to be hidden individually, making the process tedious and time-consuming. With this update, an eye icon has been added to the Annotations widget, enabling users to hide all labels within selected groups with a single click. To use this feature, users must switch from Region View to Labels View in the annotation widget.
+
+With this improvement, users can now manage labels more effectively, reducing manual effort and enhancing focus during the annotation process.
+
+![700image](/assets/images/annotation_lab/7.0.0/10.gif)
+
 ### Annotations
 
 The Annotations widget has two sections.
```
Lines changed: 185 additions & 0 deletions
---
layout: docs
header: true
seotitle: Generative AI Lab | John Snow Labs
title: Generative AI Lab 7.0.0
permalink: /docs/en/alab/annotation_labs_releases/release_notes_7_0_0
key: docs-licensed-release-notes
modify_date: 2025-03-27
show_nav: true
sidebar:
  nav: annotation-lab
---

<div class="h3-box" markdown="1">

## 7.0.0

## Generative AI Lab 7: Accelerating Clinical Annotation with HCC Coding

Generative AI Lab 7 brings many improvements that directly support real-world healthcare annotation and coding use cases. Most notably, it introduces support for Hierarchical Condition Category (HCC) coding—enabling users to streamline clinical risk adjustment workflows by automatically linking ICD codes to HCC categories, prioritizing high-value tasks, and validating codes more efficiently. The release also enables HTML-based projects to leverage Inter-Annotator Agreement (IAA) analytics for quality assurance, simplifies licensing across the suite of John Snow Labs products, and improves training scalability with dataset sampling. Enhancements to the annotation interface—including bulk label management and more precise zoom controls—further increase speed and usability. Combined with a robust set of stability and performance fixes, these capabilities give medical coders, clinicians, and data scientists the tools they need to annotate faster, train better models, and ensure higher data accuracy across large-scale projects.

## Support for HCC Coding

This release introduces support for HCC Coding for text and PDF content. The system now maps detected ICD-10 codes to their corresponding HCC codes, streamlining clinical risk adjustment workflows and insurance claim verification.

**New project types:**

1. **HCC Coding for Text**
2. **HCC Coding for PDF and Text (side by side)**

These project types enable the association of HCC codes with annotated clinical entities using preconfigured lookup datasets, reducing manual input and improving consistency in medical coding.

![700image](/assets/images/annotation_lab/7.0.0/1.png)
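Conceptually, the ICD-to-HCC association behind these project types behaves like a lookup table. The sketch below is illustrative only, not the product's internal API; the sample mappings follow the public CMS-HCC V24 model but are assumptions here, and the actual lookup datasets ship preconfigured with the application.

```python
# Illustrative sketch of an ICD-10 -> HCC lookup, NOT Generative AI Lab's API.
# Sample mappings follow the public CMS-HCC V24 model (assumed for illustration).
ICD10_TO_HCC = {
    "E11.9": ("HCC 19", "Diabetes without Complication"),
    "I50.9": ("HCC 85", "Congestive Heart Failure"),
    "J44.9": ("HCC 111", "Chronic Obstructive Pulmonary Disease"),
}

def map_annotations(detected_icd_codes):
    """Attach an HCC category to each detected ICD-10 code, when one exists."""
    results = []
    for code in detected_icd_codes:
        hcc = ICD10_TO_HCC.get(code)
        results.append({
            "icd10": code,
            "hcc": hcc[0] if hcc else None,
            "description": hcc[1] if hcc else "not HCC-relevant",
        })
    return results

# A routine exam code (Z00.00) has no HCC mapping, so it is flagged as such.
print(map_annotations(["E11.9", "Z00.00"]))
```

Codes without an HCC mapping simply carry no risk-adjustment category, which is why the lookup returns `None` rather than raising an error.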
32+
33+
### Usage Instructions
34+
To enable **HCC Coding Support**, follow these steps:
35+
36+
To enable HCC Coding Support, follow these steps:
37+
38+
39+
**1.Project Setup**
40+
- Select either of the new project templates during project creation.
41+
- Choose the HCC Coding for PDF and Text (side by side) option if you need a visual representation of the original document while performing HCC coding.
42+
43+
![700image](/assets/images/annotation_lab/7.0.0/2.png)
44+
45+
**2.Label Customization** - On the Customize Labels page, users can either:
46+
- Apply a lookup dataset globally, to all labels in your taxonomy at once.
47+
- Assign Lookup options to specific labels.
48+
49+
![700image](/assets/images/annotation_lab/7.0.0/3.png)
50+
51+
**3.Annotation Process**
52+
- Annotate entities and assign codes using the annotation widget.
53+
- Edit codes inline or through the Annotation Widget from the right panel.
54+
- Annotated chunks are listed under their respective labels. Users can expand labels by clicking the down arrow to view all chunks associated with them.
55+
- Lookup code can be edited or updated directly from labeled tokens or via the labeling section by clicking the edit button.
56+
- Predictions can be copied to generate a completion, allowing the HCC code to be reviewed using the annotation widget on the right.
57+
58+
![700image](/assets/images/annotation_lab/7.0.0/4.gif)
59+
60+
**4.Review and Confirmation**
61+
Once a task is labeled and lookup codes are assigned along with HCC Codes, reviewers have the following options:
62+
- Accept and confirm the labeled text.
63+
- Decline and remove the labels.
64+
- Tag the label as non-billable, if necessary.
65+
66+
![700image](/assets/images/annotation_lab/7.0.0/5.png)
67+
68+
### Raking Score Integration
69+
70+
Tasks can now include **ranking scores** to support triaging and prioritization, allowing users to manage large annotation datasets more effectively. When importing tasks, users can associate each task with a ranking score that reflects its clinical significance or urgency. These scores are then displayed in the task list and can be used to sort and filter tasks dynamically. This functionality is particularly beneficial in risk adjustment workflows where prioritizing complex or high-impact cases is critical. Ranking scores also integrate with the HCC coding workflow, enabling annotators and reviewers to systematically focus on the most relevant cases for validation.
71+
72+
![700image](/assets/images/annotation_lab/7.0.0/6.png)
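As a rough illustration of how ranking scores drive prioritization, the snippet below sorts tasks by score so high-impact cases surface first. The record shape and field names are assumptions for illustration, not the product's actual import schema.

```python
# Hypothetical task records with ranking scores; field names are illustrative
# assumptions, not Generative AI Lab's actual task import schema.
tasks = [
    {"title": "Routine checkup note", "ranking_score": 0.12},
    {"title": "CHF progress note", "ranking_score": 0.91},
    {"title": "Diabetes follow-up", "ranking_score": 0.58},
]

# Sorting descending by score puts the most clinically significant tasks first,
# mirroring how the task list can be sorted and filtered in the UI.
prioritized = sorted(tasks, key=lambda t: t["ranking_score"], reverse=True)
print([t["title"] for t in prioritized])
```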
## IAA for HTML Projects with NER Labels

Inter-Annotator Agreement (IAA) analytics are now supported in HTML projects with NER labels. This feature ensures more robust validation of annotation accuracy and promotes better alignment among annotators, enhancing overall project quality.

The existing workflow remains unchanged. Once an analytics request is granted, a new "Inter-Annotator Agreement" tab becomes available under the Analytics page in HTML projects, allowing users to access and interpret IAA metrics seamlessly.

- Access the new "Inter-Annotator Agreement" tab from the Analytics page.
- Visualize agreement charts and compare annotations across multiple users.

![700image](/assets/images/annotation_lab/7.0.0/7.png)
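IAA is typically quantified with chance-corrected agreement measures such as Cohen's kappa. The release notes do not specify which metrics the charts use, so the following is a generic sketch of the idea, not the product's computation:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label sequences."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in freq_a)
    return (observed - expected) / (1 - expected)

# Two annotators labeling the same four entity spans in a clinical note:
print(cohens_kappa(["DRUG", "DRUG", "DOSE", "DOSE"],
                   ["DRUG", "DOSE", "DOSE", "DOSE"]))  # 0.5
```

A kappa of 1.0 means perfect agreement, 0 means agreement no better than chance, which is why it is preferred over raw percent agreement for validating annotation quality.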
## Support for Universal Licenses

Licensing complexity is now significantly reduced through the addition of a universal license key that governs all John Snow Labs libraries and products. Before this update, customers faced the challenge of managing multiple licenses—a separate one for the application and others for specific functionalities such as the Visual or Healthcare features (e.g. in training or pre-annotation). This complexity often led to additional administrative burdens.

This enhancement simplifies deployments and license tracking across enterprise environments. It also increases flexibility, boosts efficiency, and provides a seamless experience across all John Snow Labs products. The same license key can be moved to other products – Medical LLMs, Terminology Server – or used to experiment with the Healthcare or Visual libraries in Python, as long as it contains a sufficient number of credits.

![700image](/assets/images/annotation_lab/7.0.0/8.png)

## Dataset Sampling for Efficient Model Training

To enhance the training process for **NER (Named Entity Recognition)** projects, this version introduces dataset sampling. Previously, training models on extensive datasets could lead to lengthy training periods or even failures due to the limitations of the existing infrastructure.

This update adds a new parameter, Sampling Fraction, to the Training page, where users can specify the portion of the dataset they wish to use for training. The system automatically applies this setting, using only the specified fraction of the dataset for training, thereby optimizing the training process and improving overall efficiency.

For example, if there are 500 tasks in total and the user sets the sampling fraction to 0.5, the system will randomly select 250 tasks (50% of the dataset) for training instead of using the entire dataset.

This enhancement eliminates the need for manual dataset selection, as training can now be initiated on a randomized subset, optimizing efficiency and resource utilization.

![700image](/assets/images/annotation_lab/7.0.0/9.png)
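The 500-task example above amounts to simple random sampling without replacement. A minimal sketch, assuming the fraction is applied via uniform random selection (which the release notes imply but do not spell out):

```python
import random

def sample_tasks(tasks, fraction, seed=42):
    """Randomly select a fraction of tasks for training (illustrative sketch)."""
    k = int(len(tasks) * fraction)          # e.g. 500 * 0.5 -> 250 tasks
    rng = random.Random(seed)               # fixed seed only for reproducibility here
    return rng.sample(tasks, k)             # sampling without replacement

tasks = [f"task_{i}" for i in range(500)]
subset = sample_tasks(tasks, 0.5)
print(len(subset))  # 250
```

Sampling without replacement guarantees no task is used twice in the training subset, matching the "250 of 500" behavior described above.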
## Improvements

### Bulk Hide Labels Post-Annotation

Users can now hide multiple labels at once, significantly improving efficiency when working with large documents. Previously, labels had to be hidden individually, making the process tedious and time-consuming. With this update, an eye icon has been added to the Annotations widget, enabling users to hide all annotations for a given label with a single click. To use this feature, switch from Region View to Labels View in the annotation widget.

With this improvement, users can manage labels more effectively, reducing manual effort and enhancing focus during the annotation process.

![700image](/assets/images/annotation_lab/7.0.0/10.gif)

### Improved Zoom Controls

Zooming in Visual NER projects is now more intuitive and controlled:

- Prevents excessive zoom-out, which previously caused annotation regions to overlap or disappear from view. This restriction ensures annotations remain visible and usable during review and editing.
- Restricts zoom-in to avoid unnecessary magnification into white space or low-content areas, which often led to loss of context or inefficient navigation.
- Improved positional control allows annotators to adjust the viewport while zoomed in or out, enabling smoother transitions and more precise annotation without losing sight of the surrounding content.

![700image](/assets/images/annotation_lab/7.0.0/11.gif)

### Bug Fixes

- **Tooltip for Section Names Now Supports Multi-Row Display**

  Previously, tooltips for Section Names displayed text in a single row, making long sentences difficult to read and causing words to disappear. This fix enables multi-row tooltips, ensuring better readability and text visibility.

- **'Show Labels Inside Region' Now Works Correctly in NER Projects**

  The 'Show Labels Inside Region' setting on the labeling page was not functioning in NER projects. With this fix, labels now properly show or hide based on the setting, improving task visibility and usability.

- **Removed Unnecessary `check_pre_annotation_status` Logs**

  Unnecessary `check_pre_annotation_status` logs were generated in the AnnotationLab pod each time users navigated to the task page, cluttering the logs. This fix eliminates redundant log entries, ensuring cleaner and more efficient logging.

- **Assertion Training Now Works for Side-by-Side Projects**

  Assertion training previously failed in Side-by-Side project types, disrupting the training process. This issue has been resolved, ensuring a seamless training experience.

- **Tasks Now Load Correctly in SBA-Enabled Projects**

  Users encountered a "Something Went Wrong" error when trying to view tasks in SBA-enabled projects. This issue has been fixed, allowing users to open, view, and annotate tasks without any errors.

- **Fixed Annotation Mismatches in Visual NER and Side-by-Side Projects**

  Switching between completions in Visual NER projects caused annotation inconsistencies. This issue, also present in Side-by-Side projects, has been resolved to maintain annotation consistency across completions.

- **Templatic Augmentation Task Generation Now Works Without Errors**

  Users faced errors when generating tasks via Templatic Augmentation, preventing the creation of augmented tasks. This issue has been fixed, and augmented task generation now works as expected.

- **Corrected Side-by-Side Annotation Alignment for Image and Text**

  Annotations were misaligned when comparing images and text in Side-by-Side comparisons, leading to discrepancies. This fix ensures correct annotation alignment across both modalities, improving annotation accuracy.

- **Invalid Hotkeys No Longer Trigger "Something Went Wrong" Page**

  Pressing an incorrect hotkey in Image and Text Side-by-Side projects previously redirected users to a "Something Went Wrong" page. Now, invalid hotkeys simply have no effect, preventing unnecessary disruptions.

- **Fixed "Completion Not Found" Error When Navigating Pages**

  Users encountered a "Completion Not Found" error when switching pages in Image and Text Side-by-Side projects. This issue has been fixed, allowing seamless navigation without errors.

- **Playground Now Opens Properly from Cluster Page**

  Users were unable to access the Playground from the Cluster Page due to a launch issue. This has been fixed, and the Playground now opens in a new window as intended.

- **Prevented Duplicate Model Names in Local Models Page**

  Users could rename trained models with existing names on the Local Models page, causing duplicate entries. This fix enforces unique names for each model, preventing naming conflicts.

- **Deleted Chunks No Longer Reappear When Selecting a Label**

  Previously deleted chunks were unintentionally re-annotated when selecting a label, causing unwanted label restoration. This issue has been resolved, ensuring deleted chunks remain removed unless explicitly re-added.

- **'Keep Label Selected' Setting Now Works as Expected**

  The 'Keep Label Selected After Creating a Region' setting remained active even when disabled. This has been corrected, ensuring label selection behavior follows user preferences accurately.

</div><div class="prev_ver h3-box" markdown="1">

## Versions

</div>

{%- include docs-annotation-pagination.html -%}

docs/en/alab/byol.md

Lines changed: 10 additions & 0 deletions
```diff
@@ -26,6 +26,14 @@ Once a valid license is uploaded, all the licensed (Healthcare, Finance, Legal,
 
 <img class="image image__shadow" src="/assets/images/annotation_lab/4.1.0/add_license.png" style="width:100%;"/>
 
+## Support for Universal Licenses
+
+Licensing complexity is now significantly reduced through the addition of a universal license key that governs all John Snow Labs libraries and products. Before this update, customers faced the challenge of managing multiple licenses—a separate one for the application and others for using specific functionalities like the Visual or Healthcare features (e.g. in training or preannotation). This complexity often led to additional administrative burdens.
+
+This enhancement simplifies deployments and license tracking across enterprise environments. It also increases flexibility, boosts efficiency, and provides a seamless experience across all John Snow Labs products. The same license key can be moved to other products – Medical LLMs, Terminology Server, or can be used to experiment with the Healthcare or Visual libraries in Python, as long as it contains a sufficient number of credits.
+
+![700image](/assets/images/annotation_lab/7.0.0/8.png)
+
 ## Support for Floating Licenses
 
 Generative AI Lab supports floating licenses with different scopes (_ocr: training_, _ocr: inference_, _healthcare: inference_, _healthcare: training_, _finance: inference_, _finance: training_, _legal: inference_, _legal: training_). Depending on the scope of the available license, users can perform model training and/or deploy pre-annotation servers.
@@ -95,3 +103,5 @@ On-prem users will now need to import an application license on the same License
 Our commitment remains to provide a powerful and efficient annotation tool while supporting ongoing innovation and improvements. We appreciate your continued support and look forward to introducing more enhancements to Generative AI Lab.
 
 ![6110image](/assets/images/annotation_lab/6.11.0/5.png)
```