chattr
A package that lets you interact with large language models (LLMs), such as GitHub Copilot Chat and OpenAI's GPT-3.5 and GPT-4. Its main vehicle is a Shiny app that runs inside the RStudio IDE. Here is an example of what it looks like running within a viewer pane:
Figure 1: chattr Shiny app
Despite the emphasis in this article on its integration with the RStudio IDE, it is worth mentioning that chattr also works outside of RStudio, for example in a terminal.
Getting started
To get started, install the package from GitHub, and open the Shiny app with the chattr_app() function:
# Install from GitHub
remotes::install_github("mlverse/chattr")
# Run the app
chattr::chattr_app()
#> ── chattr - Available models
#> Select the number of the model you would like to use:
#>
#> 1: GitHub - Copilot Chat - (copilot)
#>
#> 2: OpenAI - Chat Completions - gpt-3.5-turbo (gpt35)
#>
#> 3: OpenAI - Chat Completions - gpt-4 (gpt4)
#>
#> 4: LlamaGPT - ~/ggml-gpt4all-j-v1.3-groovy.bin (llamagpt)
#>
#>
#> Selection:
>
Select the model you want to interact with and the app will open. The following screenshots provide an overview of the various buttons and keyboard shortcuts available in the app.
Figure 2: chattr's UI
You can start writing your request in the main text box at the top left of the app. Then click the ‘Submit’ button or press Shift+Enter to submit your question.
chattr parses the output of the LLM, and displays the code inside chunks. It also places three buttons at the top of each chunk: one copies the code to the clipboard, one copies it directly into the active script in RStudio, and one copies the code into a new script. To close the app, press the Escape key.
The 'Settings' button opens the defaults that the chat session will use. These can be changed as you see fit. The 'Prompt' text box is additional text that is sent to the LLM as part of your question.
Figure 3: chattr UI – Settings page
Custom Settings
chattr identifies which models you have set up, and includes only those in the selection menu. For Copilot and OpenAI, chattr checks whether an authentication token is available before displaying them in the menu. For example, if you only have OpenAI set up, the prompt will look like this:
chattr::chattr_app()
#> ── chattr - Available models
#> Select the number of the model you would like to use:
#>
#> 2: OpenAI - Chat Completions - gpt-3.5-turbo (gpt35)
#>
#> 3: OpenAI - Chat Completions - gpt-4 (gpt4)
#>
#> Selection:
>
To avoid seeing the menu, use the chattr_use() function. Here is an example of setting GPT-4 as the default:
library(chattr)
chattr_use("gpt4")
chattr_app()
You can also pre-select your model by setting the CHATTR_USE environment variable.
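As a minimal sketch, the variable can be set from within R, though you would typically put the line `CHATTR_USE=gpt4` in your .Renviron file so the choice persists across sessions:

```r
# Pre-select the model so chattr skips the selection menu.
# Typically this would live in your .Renviron file instead:
#   CHATTR_USE=gpt4
Sys.setenv(CHATTR_USE = "gpt4")

# Verify the value that chattr will read at startup
Sys.getenv("CHATTR_USE")
#> [1] "gpt4"
```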
Advanced customization
You can customize many aspects of your interaction with the LLM through the chattr_defaults() function. It lets you set an additional prompt that is sent to the LLM as part of your question, choose which model to use, determine whether the chat history is sent to the LLM, and specify model-specific arguments.
For example, for OpenAI, if you want to change the maximum number of tokens used per response, you can use:
# Default for max_tokens is 1,000
library(chattr)
chattr_use("gpt4")
chattr_defaults(model_arguments = list("max_tokens" = 100))
#>
#> ── chattr ──────────────────────────────────────────────────────────────────────
#>
#> ── Defaults for: Default ──
#>
#> ── Prompt:
#> • {{readLines(system.file('prompt/base.txt', package = 'chattr'))}}
#>
#> ── Model
#> • Provider: OpenAI - Chat Completions
#> • Path/URL: https://api.openai.com/v1/chat/completions
#> • Model: gpt-4
#> • Label: GPT 4 (OpenAI)
#>
#> ── Model Arguments:
#> • max_tokens: 100
#> • temperature: 0.01
#> • stream: TRUE
#>
#> ── Context:
#> Max Data Files: 0
#> Max Data Frames: 0
#> ✔ Chat History
#> ✖ Document contents
To keep your changes as the default, use the chattr_defaults_save() function. This creates a YAML file, named 'chattr.yml' by default. When chattr finds this file, it uses it to load all of the defaults, including the selected model.
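For instance, a short session that persists customized defaults might look like the sketch below (this assumes chattr is installed and an OpenAI model is already configured):

```r
library(chattr)

# Choose the model and tweak a default
chattr_use("gpt4")
chattr_defaults(model_arguments = list("max_tokens" = 100))

# Write the current defaults to 'chattr.yml' in the working
# directory; chattr picks the file up in future sessions
chattr_defaults_save()
```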
A more detailed description of this feature can be found on the chattr website, in the article on prompt enhancements.
Beyond the app
In addition to the Shiny app, chattr offers several other ways to interact with the LLM:
- Use the chattr() function directly from the console.
- Highlight a question in your script, and use it as your prompt.
> chattr("how do I remove the legend from a ggplot?")
#> You can remove the legend from a ggplot by adding
#> `theme(legend.position = "none")` to your ggplot code.
A more detailed article can be found on the chattr website.
RStudio add-ins
chattr comes with two RStudio add-ins.
Figure 4: chattr add-ins
By binding these add-in calls to keyboard shortcuts, you can easily open the app without having to write the command each time. To learn how, see the Keyboard Shortcut section on the chattr official website.
Working with local LLMs
Pre-trained open-source models that can run on a laptop are now widely available. Instead of integrating with each model individually, chattr works with LlamaGPTJ-chat, a lightweight application that communicates with a variety of local models. Currently, LlamaGPTJ-chat integrates with the following model families:
- GPT-J (ggml and gpt4all models)
- LLaMA (Meta's ggml Vicuna models)
- Mosaic Pretrained Transformer (MPT)
LlamaGPTJ-chat works entirely in the terminal. chattr integrates with it by starting a 'hidden' terminal session, in which it initializes the selected model and makes it available for you to start chatting with.
To get started, you need to install LlamaGPTJ-chat and download a compatible model. Detailed instructions can be found here.
chattr expects the LlamaGPTJ-chat installation, and the model, to be in specific folder locations on your machine. If the installation path does not match the expected location, LlamaGPT will not appear in the menu. That is OK; you can still access it with chattr_use():
library(chattr)

chattr_use(
  "llamagpt",
  path = "[path to compiled program]",
  model = "[path to model]"
)
#>
#> ── chattr
#> • Provider: LlamaGPT
#> • Path/URL: [path to compiled program]
#> • Model: [path to model]
#> • Label: GPT4ALL 1.3 (LlamaGPT)
Extending chattr
chattr aims to make it easy to add new LLM APIs. chattr has two components: the user interface (the Shiny app and the chattr() function), and the included backends (GPT, Copilot, LlamaGPT). New backends do not need to be added directly within chattr. If you are a package developer and would like to take advantage of the chattr UI, all you need to do is define a ch_submit() method in your package.
ch_submit() has two output requirements:
- As the final return value, send the entire response from the model you are integrating into chattr.
- If the LLM is streaming (stream is TRUE), output the current result as it occurs, usually through a cat() function call.
Below is a simple toy example that shows how to create a custom method for chattr:
library(chattr)
ch_submit.ch_my_llm <- function(defaults,
                                prompt = NULL,
                                stream = NULL,
                                prompt_build = TRUE,
                                preview = FALSE,
                                ...) {
  # Use `prompt_build` to prepend the prompt
  if (prompt_build) prompt <- paste0("Use the tidyverse\n", prompt)
  # If `preview` is TRUE, return the resulting prompt
  if (preview) return(prompt)
  llm_response <- paste0("You said this: \n", prompt)
  if (stream) {
    cat(">> Streaming:\n")
    for (i in seq_len(nchar(llm_response))) {
      # If `stream` is TRUE, make sure to `cat()` the current output
      cat(substr(llm_response, i, i))
      Sys.sleep(0.1)
    }
  }
  # Make sure to return the entire output from the LLM at the end
  llm_response
}
chattr_defaults("console", provider = "my llm")
#>
chattr("hello")
#> >> Streaming:
#> You said this:
#> Use the tidyverse
#> hello
chattr("I can use it right from RStudio", prompt_build = FALSE)
#> >> Streaming:
#> You said this:
#> I can use it right from RStudio
For more information, see the function reference page (link here).
Feedback welcome
After you try it out, feel free to submit your thoughts or issues in the chattr GitHub repository.