Revision as of 06:52, 28 June 2023



LangChain is a Python framework built around Large Language Models (LLMs) that can be used for chatbots, Generative Question-Answering (GQA), summarization, and more. The core idea of the library is that different components can be "chained" together to create more advanced use cases around LLMs. LLMs are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app; the real power comes from combining them with other sources of computation or knowledge.
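The "chaining" idea can be illustrated with a minimal, framework-free sketch. This is plain Python, not LangChain's actual API: each component is a function that transforms its input and passes the result to the next, and the "LLM" here is a stand-in that echoes its prompt.

```python
# Minimal illustration of the "chain" idea behind LangChain (not the real API):
# each component is a callable; a chain pipes one component's output into the next.

def make_chain(*components):
    """Compose components left-to-right into a single callable."""
    def chain(value):
        for component in components:
            value = component(value)
        return value
    return chain

# Toy components: a prompt template and a fake "LLM" that echoes a canned answer.
def prompt_template(question):
    return f"Answer concisely: {question}"

def fake_llm(prompt):
    return f"[LLM response to: {prompt}]"

qa_chain = make_chain(prompt_template, fake_llm)
print(qa_chain("What is LangChain?"))
```

In LangChain itself the same shape appears as prompt templates, LLM wrappers, and chain classes, but the composition principle is the same.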


Getting Started

Data Independent - tutorial videos

reference videos throughout page



Documents


Long documents

Memory

Emails

Tabular Data

Javascript

Pinecone

Supabase

Water

Visual ChatGPT

Summarization

Hugging Face

LLama

GPT-Index

Comparing Large Language Models (LLM)

Gradio

Filtering LLM

Weights & Biases (W&B)

Weights & Biases: https://wandb.ai
Prompts Quickstart | WandB: https://docs.wandb.ai/guides/prompts/quickstart

The W&B Sweeps and LangChain integration lets you fine-tune LLMs on your own data while using W&B for visualization and debugging. W&B Sweeps is a hyperparameter-optimization tool that searches for the best combination of hyperparameters for your model. With the integration you can:

  • Create a LangChain model, chain, or agent that uses an LLM as a backend.
  • Import WandbTracer from wandb.integration.langchain and use it to continuously log calls to your LangChain object.
  • Use the W&B dashboard to visualize and debug your LangChain object, for example by viewing prompts, responses, metrics, and errors.
  • Use W&B Sweeps to optimize the hyperparameters of your LangChain object, such as the prompt template, the context length, the temperature, and the top-k.
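As a concrete sketch of the last point, a W&B sweep is defined by a configuration naming the search method, the metric to optimize, and the hyperparameter ranges. The parameter names and ranges below (temperature, top_k, prompt_template) are illustrative assumptions, not values prescribed by either library, and the wandb calls that would register the sweep are left as comments because they require an authenticated wandb client.

```python
# Hypothetical sweep configuration for tuning LangChain LLM settings with W&B Sweeps.
# Parameter names and ranges here are illustrative, not prescribed by either library.
sweep_config = {
    "method": "bayes",  # search strategy: "grid", "random", or "bayes"
    "metric": {"name": "answer_quality", "goal": "maximize"},  # a metric your eval logs
    "parameters": {
        "temperature": {"min": 0.0, "max": 1.0},
        "top_k": {"values": [10, 40, 100]},
        "prompt_template": {
            "values": ["Answer concisely: {q}", "Think step by step: {q}"]
        },
    },
}

# With the wandb client installed and logged in, the sweep would be registered and
# run roughly like this (commented out here because it needs network access):
# import wandb
# sweep_id = wandb.sweep(sweep_config, project="langchain-tuning")
# wandb.agent(sweep_id, function=train_and_evaluate)  # train_and_evaluate is yours
```

Each agent run receives one sampled combination of these parameters, applies it to the LangChain object, and logs the resulting metric back to W&B.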

Weights & Biases Logging/LLMOps is a feature of the Weights & Biases platform, a developer-first MLOps platform that provides an enterprise-grade, end-to-end MLOps workflow to accelerate ML activities. It lets you optimize LLM operations and prompt engineering with W&B.


More