Commit 2c84d6c

Merge branch 'master' into openai-embeddings-not-respecting-chunk-size
2 parents d076b36 + 98c357b commit 2c84d6c

78 files changed (+1227, -651 lines changed)


docs/api_reference/conf.py

Lines changed: 4 additions & 0 deletions
@@ -275,3 +275,7 @@ def skip_private_members(app, what, name, obj, skip, options):
 html_context["READTHEDOCS"] = True

 master_doc = "index"
+
+# If a signature’s length in characters exceeds 60,
+# each parameter within the signature will be displayed on an individual logical line
+maximum_signature_line_length = 60
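
For illustration only (the function below is hypothetical, not part of this commit): with this option set, Sphinx wraps any documented signature longer than 60 characters so that each parameter lands on its own line, roughly like:

```python
# Hypothetical autodoc rendering once maximum_signature_line_length = 60
# is in effect; a signature over 60 characters wraps one parameter per line.
def create_client(
    api_key,
    base_url=None,
    timeout=60,
    max_retries=3,
):
    ...
```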

docs/docs/changes/changelog/core.mdx

Lines changed: 1 addition & 1 deletion
@@ -6,5 +6,5 @@

 - `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
 - `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
-- `BaseLLM` methods `__call__, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
+- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
 - `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
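
A minimal before/after sketch of the migration these changelog entries point to (the model class is illustrative; any `BaseChatModel` or `BaseLLM` subclass behaves the same way):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()  # illustrative; any BaseChatModel works the same way

# Deprecated, removed in 0.2.0:
#   llm("Hello")              # __call__
#   llm.predict("Hello")
#   await llm.apredict("Hello")

# Preferred:
result = llm.invoke("Hello")
# async: result = await llm.ainvoke("Hello")
```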

docs/docs/concepts/runnables.mdx

Lines changed: 2 additions & 2 deletions
@@ -126,7 +126,7 @@ Please see the [Configurable Runnables](#configurable-runnables) section for mor
 LangChain will automatically try to infer the input and output types of a Runnable based on available information.

 Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types
-).
+)).

 ## RunnableConfig

@@ -194,7 +194,7 @@ In Python 3.11 and above, this works out of the box, and you do not need to do a
 In Python 3.9 and 3.10, if you are using **async code**, you need to manually pass the `RunnableConfig` through to the `Runnable` when invoking it.

 This is due to a limitation in [asyncio's tasks](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) in Python 3.9 and 3.10 which did
-not accept a `context` argument).
+not accept a `context` argument.

 Propagating the `RunnableConfig` manually is done like so:
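
The `with_types` method referenced in the first hunk takes `input_type` and `output_type` keyword arguments; a minimal sketch of overriding an inferred type (the chain itself is hypothetical, not from this commit):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda

# An LCEL composition whose input type may be inferred too loosely.
chain = RunnableLambda(lambda topic: {"topic": topic}) | ChatPromptTemplate.from_template(
    "Tell me a joke about {topic}"
)

# Override the inferred types so the input/output schemas are accurate.
typed_chain = chain.with_types(input_type=str)
```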

docs/docs/how_to/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -351,7 +351,7 @@ LangSmith allows you to closely trace, monitor and evaluate your LLM application
 It seamlessly integrates with LangChain and LangGraph, and you can use it to inspect and debug individual steps of your chains and agents as you build.

 LangSmith documentation is hosted on a separate site.
-You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/), but we'll highlight a few sections that are particularly
+You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/), but we'll highlight a few sections that are particularly
 relevant to LangChain below:

 ### Evaluation

docs/docs/integrations/chat/bedrock.ipynb

Lines changed: 389 additions & 366 deletions
Large diffs are not rendered by default.

docs/docs/integrations/chat/openai.ipynb

Lines changed: 72 additions & 1 deletion
@@ -408,7 +408,7 @@
     "\n",
     ":::\n",
     "\n",
-    "OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages.\n",
+    "OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages, as well as the output from [reasoning processes](https://platform.openai.com/docs/guides/reasoning?api-mode=responses).\n",
     "\n",
     "`ChatOpenAI` will route to the Responses API if one of these features is used. You can also specify `use_responses_api=True` when instantiating `ChatOpenAI`.\n",
     "\n",
@@ -1056,6 +1056,77 @@
     "print(second_response.text())"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "67bf5bd2-0935-40a0-b1cd-c6662b681d4b",
+   "metadata": {},
+   "source": [
+    "### Reasoning output\n",
+    "\n",
+    "Some OpenAI models will generate separate text content illustrating their reasoning process. See OpenAI's [reasoning documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses) for details.\n",
+    "\n",
+    "OpenAI can return a summary of the model's reasoning (although it doesn't expose the raw reasoning tokens). To configure `ChatOpenAI` to return this summary, specify the `reasoning` parameter:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "8d322f3a-0732-45ab-ac95-dfd4596e0d85",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'3^3 = 3 × 3 × 3 = 27.'"
+      ]
+     },
+     "execution_count": 2,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "from langchain_openai import ChatOpenAI\n",
+    "\n",
+    "reasoning = {\n",
+    "    \"effort\": \"medium\",  # 'low', 'medium', or 'high'\n",
+    "    \"summary\": \"auto\",  # 'detailed', 'auto', or None\n",
+    "}\n",
+    "\n",
+    "llm = ChatOpenAI(\n",
+    "    model=\"o4-mini\",\n",
+    "    use_responses_api=True,\n",
+    "    model_kwargs={\"reasoning\": reasoning},\n",
+    ")\n",
+    "response = llm.invoke(\"What is 3^3?\")\n",
+    "\n",
+    "# Output\n",
+    "response.text()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "id": "d7dcc082-b7c8-41b7-a5e2-441b9679e41b",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "**Calculating power of three**\n",
+      "\n",
+      "The user is asking for the result of 3 to the power of 3, which I know is 27. It's a straightforward question, so I’ll keep my answer concise: 27. I could explain that this is the same as multiplying 3 by itself twice: 3 × 3 × 3 equals 27. However, since the user likely just needs the answer, I’ll simply respond with 27.\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Reasoning\n",
+    "reasoning = response.additional_kwargs[\"reasoning\"]\n",
+    "for block in reasoning[\"summary\"]:\n",
+    "    print(block[\"text\"])"
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "57e27714",

docs/docs/integrations/providers/aws.mdx

Lines changed: 11 additions & 2 deletions
@@ -35,9 +35,18 @@ from langchain_aws import ChatBedrock
 ```

 ### Bedrock Converse
-AWS has recently released the Bedrock Converse API which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html). To improve reliability the ChatBedrock integration will switch to using the Bedrock Converse API as soon as it has feature parity with the existing Bedrock API. Until then a separate [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) integration has been released.
+AWS Bedrock maintains a [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html)
+that provides a unified conversational interface for Bedrock models. This API does not
+yet support custom models. You can see a list of all
+[models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html).

-We recommend using `ChatBedrockConverse` for users who do not need to use custom models. See the [docs](/docs/integrations/chat/bedrock/#bedrock-converse-api) and [API reference](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) for more detail.
+:::info
+
+We recommend the Converse API for users who do not need to use custom models. It can be accessed using [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html).
+
+:::
+
+See a [usage example](/docs/integrations/chat/bedrock).

 ```python
 from langchain_aws import ChatBedrockConverse
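
The diff context ends just inside the usage snippet; a minimal sketch of what instantiating `ChatBedrockConverse` looks like (the model ID and region below are illustrative, not from this commit):

```python
from langchain_aws import ChatBedrockConverse

# Model ID and region are illustrative; any Converse-supported model works.
llm = ChatBedrockConverse(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",
    region_name="us-east-1",
)

response = llm.invoke("Sing a ballad of LangChain.")
print(response.content)
```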
Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
+# Smabbler
+> Smabbler’s graph-powered platform boosts AI development by transforming data into a structured knowledge foundation.
+
+# Galaxia
+
+> Galaxia Knowledge Base is an integrated knowledge base and retrieval mechanism for RAG. In contrast to standard solutions, it is based on Knowledge Graphs built using symbolic NLP and Knowledge Representation solutions. Provided texts are analysed and transformed into Graphs containing text, language and semantic information. This rich structure allows for retrieval that is based on semantic information, not on vector similarity/distance.
+
+Implementing RAG using Galaxia involves first uploading your files to [Galaxia](https://beta.cloud.smabbler.com/home), analyzing them there, and then building a model (knowledge graph). When the model is built, you can use `GalaxiaRetriever` to connect to the API and start retrieving.
+
+More information: [docs](https://smabbler.gitbook.io/smabbler)
+
+## Installation
+```
+pip install langchain-galaxia-retriever
+```
+
+## Usage
+
+```
+from langchain_galaxia_retriever.retriever import GalaxiaRetriever
+
+gr = GalaxiaRetriever(
+    api_url="beta.api.smabbler.com",
+    api_key="<key>",
+    knowledge_base_id="<knowledge_base_id>",
+    n_retries=10,
+    wait_time=5,
+)
+
+result = gr.invoke('<test question>')
+print(result)
+```
Lines changed: 213 additions & 0 deletions
@@ -0,0 +1,213 @@
+{
+ "cells": [
+  {
+   "cell_type": "raw",
+   "id": "2af1fec5-4ca6-4167-8ee1-13314aac3258",
+   "metadata": {
+    "vscode": {
+     "languageId": "raw"
+    }
+   },
+   "source": [
+    "---\n",
+    "sidebar_label: Galaxia\n",
+    "---"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "1d7d6cbc-4373-4fb5-94dd-acd610165452",
+   "metadata": {},
+   "source": [
+    "# Galaxia Retriever\n",
+    "\n",
+    "Galaxia is a GraphRAG solution which automates document processing, knowledge base (Graph Language Model) creation, and retrieval:\n",
+    "[galaxia-rag](https://smabbler.gitbook.io/smabbler/api-rag/smabblers-api-rag)\n",
+    "\n",
+    "To use Galaxia, first upload your texts and create a Graph Language Model here: [smabbler-cloud](https://beta.cloud.smabbler.com)\n",
+    "\n",
+    "After the model is built and activated, you will be able to use this integration to retrieve what you need.\n",
+    "\n",
+    "The module repository is located here: [github](https://github.com/rrozanski-smabbler/galaxia-langchain)\n",
+    "\n",
+    "### Integration details\n",
+    "| Retriever | Self-host | Cloud offering | Package |\n",
+    "| :--- | :--- | :---: | :---: |\n",
+    "| [Galaxia Retriever](https://github.com/rrozanski-smabbler/galaxia-langchain) | ❌ | ✅ | __langchain-galaxia-retriever__ |"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "82fa1c05-c205-4429-a74c-e6c81c4e8611",
+   "metadata": {},
+   "source": [
+    "## Setup\n",
+    "Before you can retrieve anything, you need to create your Graph Language Model here: [smabbler-cloud](https://beta.cloud.smabbler.com)\n",
+    "\n",
+    "following these 3 simple steps: [rag-instruction](https://smabbler.gitbook.io/smabbler/api-rag/build-rag-model-in-3-steps)\n",
+    "\n",
+    "Don't forget to activate the model after building it!"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "91897867-eb39-4c3b-8df8-5427043ecdcd",
+   "metadata": {},
+   "source": [
+    "### Installation\n",
+    "The retriever is implemented in the following package: [pypi](https://pypi.org/project/langchain-galaxia-retriever/)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ceca36f2-013c-4b28-81fe-8808d0cf6419",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%pip install -qU langchain-galaxia-retriever"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "019e0e50-5e66-440b-9cf1-d21b4009bf13",
+   "metadata": {},
+   "source": [
+    "## Instantiation"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "c7188217-4b26-4201-b15a-b7a5f263f815",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain_galaxia_retriever.retriever import GalaxiaRetriever\n",
+    "\n",
+    "gr = GalaxiaRetriever(\n",
+    "    api_url=\"beta.api.smabbler.com\",\n",
+    "    api_key=\"<key>\",  # you can find it here: https://beta.cloud.smabbler.com/user/account\n",
+    "    knowledge_base_id=\"<knowledge_base_id>\",  # you can find it at https://beta.cloud.smabbler.com, in the model table\n",
+    "    n_retries=10,\n",
+    "    wait_time=5,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "02d288a5-4f76-472e-9a60-eea8e6b8dc7a",
+   "metadata": {},
+   "source": [
+    "## Usage"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "5f79e03f-77a6-4eb6-b41d-f3da2f897654",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "result = gr.invoke(\"<test question>\")\n",
+    "print(result)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "ffb2a595-a901-477a-a374-efd091bc1c9a",
+   "metadata": {},
+   "source": [
+    "## Use within a chain"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "9c2e2394-ca33-47be-a851-551b4216daea",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# | output: false\n",
+    "# | echo: false\n",
+    "\n",
+    "from langchain_openai import ChatOpenAI\n",
+    "\n",
+    "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ed8699d6-d65d-40ea-8c58-8d809cc512cf",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain_core.output_parsers import StrOutputParser\n",
+    "from langchain_core.prompts import ChatPromptTemplate\n",
+    "from langchain_core.runnables import RunnablePassthrough\n",
+    "\n",
+    "prompt = ChatPromptTemplate.from_template(\n",
+    "    \"\"\"Answer the question based only on the context provided.\n",
+    "\n",
+    "Context: {context}\n",
+    "\n",
+    "Question: {question}\"\"\"\n",
+    ")\n",
+    "\n",
+    "\n",
+    "def format_docs(docs):\n",
+    "    return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
+    "\n",
+    "\n",
+    "chain = (\n",
+    "    {\"context\": gr | format_docs, \"question\": RunnablePassthrough()}\n",
+    "    | prompt\n",
+    "    | llm\n",
+    "    | StrOutputParser()\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "f9b944d7-8800-4926-b1ce-fcdc52ecda1c",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "chain.invoke(\"<test question>\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "11b5c9a5-0a66-415f-98f8-f12080cad30a",
+   "metadata": {},
+   "source": [
+    "## API reference\n",
+    "\n",
+    "For more information about the Galaxia Retriever, check its implementation on [GitHub](https://github.com/rrozanski-smabbler/galaxia-langchain)."
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
