LangChain

Revision as of 21:09, 16 August 2023



LangChain is a Python framework built around Large Language Models (LLMs) that can be used for chatbots, Generative Question-Answering (GQA), summarization, and more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs. LLMs are emerging as a transformative technology, enabling developers to build applications that were previously impossible. But using LLMs in isolation is often not enough to create a truly powerful app; the real power comes from combining them with other sources of computation or knowledge.
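The "chain" idea can be sketched in a few lines of plain Python: each component transforms its input and hands the result to the next. The class and method names below are illustrative stand-ins, not the actual LangChain API, and the LLM is a fake that echoes its prompt.

```python
# Illustrative sketch of "chaining" components. These names are
# hypothetical, not the real LangChain API.

class PromptTemplate:
    """Fills a template string with user-supplied variables."""
    def __init__(self, template):
        self.template = template

    def run(self, variables):
        return self.template.format(**variables)

class FakeLLM:
    """Stand-in for a real LLM call (e.g. an API request)."""
    def run(self, prompt):
        return f"LLM answer to: {prompt}"

class Chain:
    """Pipes the output of one component into the next."""
    def __init__(self, *components):
        self.components = components

    def run(self, value):
        for component in self.components:
            value = component.run(value)
        return value

chain = Chain(PromptTemplate("Summarize: {text}"), FakeLLM())
print(chain.run({"text": "LangChain chains LLM components."}))
# → LLM answer to: Summarize: LangChain chains LLM components.
```

A real chain would swap `FakeLLM` for an actual model call and could append further steps, such as an output parser or a second prompt.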

LangChain offers a way to interact with and fine-tune LLMs on local data, providing a secure and efficient alternative to sending private data to external APIs. It allows companies to extract knowledge from their own data and build chatbots or other applications that understand complex, domain-specific information. By combining user input with prompts and passing the result to an LLM, LangChain enables seamless integration and extends what applications can do. It can use vector databases as memory, allowing efficient access to relevant information during the application's execution.
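Using a vector database as memory boils down to: embed each document, embed the query, and return the most similar document. The sketch below uses a toy bag-of-words embedding and an in-memory store purely for illustration; real applications use learned embeddings and a database such as Pinecone or FAISS.

```python
# Toy sketch of vector-store retrieval. The embedding and store are
# deliberately minimal, not a real vector database.
import math
from collections import Counter

def embed(text):
    """Toy embedding: lowercase bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((embed(text), text))

    def most_similar(self, query):
        qv = embed(query)
        return max(self.docs, key=lambda d: cosine(qv, d[0]))[1]

store = VectorStore()
store.add("Invoices are processed by the finance team.")
store.add("The VPN requires two-factor authentication.")
print(store.most_similar("how do I set up the vpn"))
# → The VPN requires two-factor authentication.
```

In a LangChain application the retrieved document would be inserted into the prompt as context before the LLM is called.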


Getting Started

Data Independent - tutorial videos

reference videos throughout page



Documents


Long documents

Memory

Emails

Tabular Data

Javascript

Pinecone

Supabase

Water

Visual ChatGPT

Summarization

Hugging Face

LLaMA

GPT-Index

Comparing Large Language Models (LLM)

Gradio

Filtering LLM

Weights & Biases (W&B)

The W&B Sweeps and LangChain integration lets you fine-tune LLMs on your own data using W&B Sweeps, with W&B providing visualization and debugging. W&B Sweeps is a hyperparameter optimization tool that helps you find the best combination of hyperparameters for your model. With the integration you can:

  • Create a LangChain model, chain, or agent that uses an LLM as a backend.
  • Import WandbTracer from wandb.integration.langchain and use it to continuously log calls to your LangChain object.
  • Use W&B dashboard to visualize and debug your LangChain object, such as viewing the prompts, responses, metrics, and errors.
  • Use W&B Sweeps to optimize the hyperparameters of your LangChain object, such as the prompt template, the context length, the temperature, and the top-k.
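A sweep over LLM-facing hyperparameters like those above might be configured as follows. The parameter names and values are illustrative assumptions, not taken from any specific project; the config dict format (`method`, `metric`, `parameters`) follows W&B's sweep configuration schema.

```python
# Hypothetical W&B sweep configuration for LLM hyperparameters.
# Parameter names and ranges are illustrative assumptions.
sweep_config = {
    "method": "bayes",  # Bayesian hyperparameter search
    "metric": {"name": "eval_score", "goal": "maximize"},
    "parameters": {
        "temperature": {"min": 0.0, "max": 1.0},
        "top_k": {"values": [10, 40, 100]},
        "context_length": {"values": [512, 1024, 2048]},
        "prompt_template": {
            "values": [
                "Answer briefly: {question}",
                "Think step by step: {question}",
            ]
        },
    },
}

# In a real run you would launch the sweep with:
#   import wandb
#   sweep_id = wandb.sweep(sweep_config, project="langchain-tuning")
#   wandb.agent(sweep_id, function=train_and_evaluate)
print(sorted(sweep_config["parameters"]))
```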

Weights & Biases Logging/LLMOps is a feature of the Weights & Biases platform, a developer-first MLOps platform that provides enterprise-grade, end-to-end MLOps workflows to accelerate ML activities. It lets you optimize LLM operations and prompt engineering with W&B.


More