@cognosis/platform / Exports
- Chatbot
- ChatbotChuck
- ChatbotNilp
- DumpAndRestoreCogsetTeachings
- IJavascript
- IndexCog
- NLPonDB
- TranscriptToStructuredData
- embeddings
- embeddingsFromTextSearch
- esMappings
- keywordsFromQuery
- logger
- mapPromptTemplate
- mapreduce_question_text
- mapreduce_summary
- minGenerate
- mysqlQuery
- nlpcloud_generate
- openai_generate
- post_message_filtering
- promptReducer
- promptTemplate
- questionAndAnswer
- read
- reducePromptTemplate
- send
- sendread
- splitPromptTemplateByLinesOfTokens
- splitPromptTemplateByTokens
- stable_diffusion
- storeEmbeddings
- testSession
- translateQuerySpaceToAnswerSpace
- wf_esindex
- wf_essearch
- xNLPonDB
Ƭ llm_models: openai_models | nlpcloud_models
Ƭ nlpcloud_models: "gpt-neox-20b"
Ƭ openai_models: "gpt-3" | "text-curie-001"
• Const getOutputBuffer: QueryDefinition<string, []>
• Const userInputSignal: SignalDefinition<[UserInput]>
• Const userOutputListenerSignal: SignalDefinition<[{ listener_wf: string ; target_wf: string }]>
• Const userOutputSignal: SignalDefinition<[UserOutput]>
▸ Chatbot(personality, context_length, user, message, session?, runCogs?): Promise<string>
Function

Invocation of the chatbot function to generate a response to a message.

Example

Example of a chatbot invocation:

const response = await Chatbot( {name: "Gandalf", personality: "Wizard. Good, but unpredictable. Extremely powerful and wise."}, 50, 'user555555', 'Hello, there!' );

| Name | Type | Default value | Description |
|---|---|---|---|
| personality | Personality | undefined | - |
| context_length | number | undefined | Number of previous messages to use as context |
| user | string | undefined | User to respond to |
| message | string | undefined | Message to respond to |
| session | ChatSession | undefined | Chat session to use (default: new session) |
| runCogs | boolean | true | Whether to run cogs (default: true) |
Promise<string>
▸ ChatbotChuck(user, message): Promise<string>
Invocation of Chatbot with the personality Chuck

Example

ChatbotChuck('anon55', 'Hello, how are you?')

| Name | Type | Description |
|---|---|---|
| user | string | The user |
| message | string | The message |
Promise<string>
The response
▸ ChatbotNilp(user, message): Promise<string>
Invocation of Chatbot with the personality Nilp

Example

ChatbotNilp('anon55', 'Hello, how are you?')

| Name | Type | Description |
|---|---|---|
| user | string | The user |
| message | string | The message |
Promise<string>
The response
▸ DumpAndRestoreCogsetTeachings(): Promise<void>
Promise<void>
▸ IJavascript(query): Promise<string>
GPT-3 can use an IPython/Jupyter notebook "memetic proxy" to follow instructions while writing code to solve a problem. This workflow uses that memetic proxy to solve a problem passed as a string.

Example

const result = await IJavascript('The number of legs a spider has multiplied by the estimated population of France');

| Name | Type | Description |
|---|---|---|
| query | string | Instructions to follow, which GPT-3 will try to solve by composing code in a Javascript notebook |
Promise<string>
workflows/application/ijavascript.ts:15
▸ IndexCog(cog): Promise<void>
| Name | Type |
|---|---|
| cog | Cog |
Promise<void>
▸ NLPonDB(query): Promise<any>
| Name | Type |
|---|---|
| query | string |
Promise<any>
▸ TranscriptToStructuredData(transcript): Promise<string>
Takes a transcription of a call and returns information about the call as JSON.

Example

const callInfo = JSON.stringify( await TranscriptToStructuredData('Caller: Hello, there!') );

| Name | Type | Description |
|---|---|---|
| transcript | string | Transcription of the call |
Promise<string>
JSON string with information about the call
workflows/application/call-transcription.ts:10
▸ embeddings(sentences): Promise<[string, number[]][]>
| Name | Type |
|---|---|
| sentences | string[] |
Promise<[string, number[]][]>
▸ embeddingsFromTextSearch(index, text, k): Promise<any[]>
| Name | Type |
|---|---|
| index | string |
| text | string |
| k | number |
Promise<any[]>
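The signature suggests a top-k nearest-neighbour lookup over stored embeddings. The sketch below illustrates the idea with an in-memory cosine-similarity search; `cosineSimilarity` and `topK` are hypothetical helpers for illustration only, not part of this package, which presumably delegates the search to Elasticsearch.

```typescript
// Illustrative in-memory version of a top-k embedding search.
// cosineSimilarity and topK are hypothetical helpers, not package exports.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(
  query: number[],
  docs: { text: string; vector: number[] }[],
  k: number
): string[] {
  return docs
    .map((d) => ({ text: d.text, score: cosineSimilarity(query, d.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((d) => d.text);
}

const docs = [
  { text: "cats", vector: [1, 0] },
  { text: "dogs", vector: [0.9, 0.1] },
  { text: "stocks", vector: [0, 1] },
];
const nearest = topK([1, 0], docs, 2); // → ["cats", "dogs"]
```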
▸ esMappings(index, doc): Promise<void>
| Name | Type |
|---|---|
| index | string |
| doc | any |
Promise<void>
▸ keywordsFromQuery(query): Promise<string>
| Name | Type |
|---|---|
| query | string |
Promise<string>
▸ logger(msg): Promise<void>
| Name | Type |
|---|---|
| msg | string |
Promise<void>
▸ mapPromptTemplate(text, primarySummarizeTemplate?): Promise<string[]>
mapPromptTemplate
| Name | Type | Default value | Description |
|---|---|---|---|
| text | string | undefined | Input text to be processed |
| primarySummarizeTemplate | string | 'Analyze the following text for a detailed summary.\n\n{{{chunk}}}\n\nProvide a detailed summary:' | Prompt template run on each chunk of text |
Promise<string[]>
List of completions from running prompt primarySummarizeTemplate on each chunk of text
▸ mapreduce_question_text(text, question, primarySummarizeTemplate?, reduceSummarizeTemplate?): Promise<string>
mapreduce_question_text
| Name | Type | Description |
|---|---|---|
| text | string | Text to be processed |
| question | string | - |
| primarySummarizeTemplate | string | - |
| reduceSummarizeTemplate | string | - |
Promise<string>
A promise that resolves to the final answer
▸ mapreduce_summary(text, primarySummarizeTemplate?, reduceSummarizeTemplate?): Promise<string>
| Name | Type | Default value | Description |
|---|---|---|---|
| text | string | undefined | Text to summarize, which may be larger than the context size of the LLM model |
| primarySummarizeTemplate | string | 'Analyze the following text for a detailed summary.\n\n{{{chunk}}}\n\nProvide a detailed summary:' | Template to use for the map-step summary |
| reduceSummarizeTemplate | string | 'These are a series of summaries that you are going to summarize:\n\n{{{chunk}}}\n\nProvide a detailed summary in the 3rd party passive voice, removing duplicate information:' | Template to use for the reduce-step summary |
Promise<string>
A summary of the text
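The default templates suggest the standard map-reduce summarization pattern: split the text into chunks that fit the model context, summarize each chunk (map), then summarize the concatenated summaries (reduce). A minimal sketch of that pattern, with a caller-supplied `summarize` function standing in for the LLM call (`chunkText` and `mapReduceSummary` are illustrative names, not package exports):

```typescript
// Sketch of map-reduce summarization; `summarize` stands in for an LLM call.
function chunkText(text: string, maxChars: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

async function mapReduceSummary(
  text: string,
  summarize: (chunk: string) => Promise<string>,
  maxChars = 2000
): Promise<string> {
  // Map: summarize each chunk independently.
  const partials = await Promise.all(chunkText(text, maxChars).map(summarize));
  // Reduce: summarize the concatenated partial summaries.
  return summarize(partials.join("\n\n"));
}
```

The real workflow presumably splits by tokens rather than characters (see splitPromptTemplateByTokens below), but the control flow is the same.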
▸ minGenerate(prompt, minLength, maxLength, temperature, endSequence?, model?): Promise<string>
Function
minGenerate
Description
A workflow that will generate text using sensible defaults with a sensible default LLM
| Name | Type | Default value |
|---|---|---|
| prompt | string | undefined |
| minLength | number | undefined |
| maxLength | number | undefined |
| temperature | number | undefined |
| endSequence | null \| string | null |
| model | llm_models | 'gpt-3' |
Promise<string>
▸ mysqlQuery<T>(dbhost, dbuser, dbpassword, dbname, sql, parameters): Promise<T[]>
Function
mysqlQuery
Description
A workflow that simply calls an activity
Type parameters

| Name |
|---|
| T |

| Name | Type |
|---|---|
| dbhost | string |
| dbuser | string |
| dbpassword | string |
| dbname | string |
| sql | string |
| parameters | any[] |
Promise<T[]>
▸ nlpcloud_generate(prompt, minLength?, maxLength?, lengthNoInput?, endSequence?, removeInput?, doSample, numBeams, earlyStopping, noRepeatNgramSize, numReturnSequences, topK, topP, temperature, repetitionPenalty, lengthPenalty, badWords, removeEndSequence): Promise<string>
Function
nlpcloud_generate
Description
A workflow that will generate text using the NLP Cloud API
| Name | Type | Default value |
|---|---|---|
| prompt | string | undefined |
| minLength | number | 10 |
| maxLength | number | 20 |
| lengthNoInput | null \| boolean | null |
| endSequence | null \| string | null |
| removeInput | boolean | true |
| doSample | null \| boolean | undefined |
| numBeams | null \| number | undefined |
| earlyStopping | null \| boolean | undefined |
| noRepeatNgramSize | null \| number | undefined |
| numReturnSequences | null \| number | undefined |
| topK | null \| number | undefined |
| topP | null \| number | undefined |
| temperature | null \| number | undefined |
| repetitionPenalty | null \| number | undefined |
| lengthPenalty | null \| number | undefined |
| badWords | null \| boolean | undefined |
| removeEndSequence | null \| boolean | undefined |
Promise<string>
▸ openai_generate(prompt, min_length, max_length, temperature, top_p): Promise<string>
Function
openai_generate
Description
A workflow that will generate text using the OpenAI API
| Name | Type |
|---|---|
| prompt | string |
| min_length | number |
| max_length | number |
| temperature | number |
| top_p | number |
Promise<string>
▸ post_message_filtering(session): Promise<string>
| Name | Type |
|---|---|
| session | ChatSession |
Promise<string>
▸ promptReducer(inPrompt, variables, preamble, instructions): Promise<string>
| Name | Type |
|---|---|
| inPrompt | string |
| variables | any |
| preamble | string |
| instructions | string |
Promise<string>
▸ promptTemplate<T>(template, variables, minLength?, maxLength?, temperature?, model?): Promise<string>
Type parameters

| Name |
|---|
| T |

| Name | Type | Default value |
|---|---|---|
| template | string | undefined |
| variables | T | undefined |
| minLength | number | 1 |
| maxLength | number | 50 |
| temperature | number | 0.0 |
| model | "gpt-3" \| "gpt-neox-20b" | 'gpt-neox-20b' |
Promise<string>
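The default templates elsewhere in this reference use triple-brace placeholders such as {{{chunk}}}, so the variable-substitution step presumably looks something like the sketch below. `fillTemplate` is a hypothetical helper for illustration; the real promptTemplate also sends the filled prompt to the selected model.

```typescript
// Minimal triple-brace substitution, as in {{{chunk}}}-style templates.
// Hypothetical helper; the real promptTemplate also calls the LLM.
function fillTemplate(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{\{(\w+)\}\}\}/g, (match, name: string) =>
    name in variables ? variables[name] : match
  );
}

const prompt = fillTemplate(
  "Analyze the following text.\n\n{{{chunk}}}\n\nProvide a summary:",
  { chunk: "Some input text." }
);
// prompt → "Analyze the following text.\n\nSome input text.\n\nProvide a summary:"
```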
▸ questionAndAnswer(index, query): Promise<QandA>
| Name | Type |
|---|---|
| index | string |
| query | string |
Promise<QandA>
▸ read(wfid): Promise<string>
| Name | Type |
|---|---|
| wfid | string |
Promise<string>
▸ reducePromptTemplate(completions, reduceTemplate?): Promise<string>
reducePromptTemplate
| Name | Type | Default value | Description |
|---|---|---|---|
| completions | string[] | undefined | Array of completions, usually output from mapPromptTemplate |
| reduceTemplate | string | 'These are a series of summaries that you are going to summarize:\n\n{{{chunk}}}\n\nProvide a detailed summary, but removing duplicate information:' | Prompt template run on the completions to reduce them to a single summary |
Promise<string>
The result of running the reduce prompt template on the completions produced by the map prompt templates.
▸ send(wfid, message): Promise<void>
| Name | Type |
|---|---|
| wfid | string |
| message | FrameInput |
Promise<void>
▸ sendread(wfid, message): Promise<string>
| Name | Type |
|---|---|
| wfid | string |
| message | Frame |
Promise<string>
▸ splitPromptTemplateByLinesOfTokens(data, template, minLength?, maxLength?, temperature?): Promise<[string, string, number[]][]>
| Name | Type | Default value |
|---|---|---|
| data | string | undefined |
| template | string | undefined |
| minLength | number | 1 |
| maxLength | number | 50 |
| temperature | number | 0.0 |
Promise<[string, string, number[]][]>
▸ splitPromptTemplateByTokens(data, template, minLength?, maxLength?, temperature?): Promise<[string, string][]>
Function
splitPromptTemplateByTokens
| Name | Type | Default value |
|---|---|---|
| data | string | undefined |
| template | string | undefined |
| minLength | number | 1 |
| maxLength | number | 50 |
| temperature | number | 0.0 |
Promise<[string, string][]>
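Both split functions presumably break the input so that each filled template stays within a token budget before running the prompt on each piece. The sketch below shows the splitting idea with a naive whitespace tokenizer; `splitByTokens` is an illustrative name, and a real implementation would use the model's own tokenizer rather than splitting on whitespace.

```typescript
// Naive token-budget splitter; real code would use the model's tokenizer.
function splitByTokens(data: string, maxTokens: number): string[] {
  const tokens = data.split(/\s+/).filter((t) => t.length > 0);
  const chunks: string[] = [];
  for (let i = 0; i < tokens.length; i += maxTokens) {
    chunks.push(tokens.slice(i, i + maxTokens).join(" "));
  }
  return chunks;
}

const parts = splitByTokens("one two three four five", 2);
// → ["one two", "three four", "five"]
```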
▸ stable_diffusion(prompt): Promise<string>
| Name | Type |
|---|---|
| prompt | string |
Promise<string>
▸ storeEmbeddings(sentences, index, documents, alsoTokenize?): Promise<string>
| Name | Type | Default value |
|---|---|---|
| sentences | string[] | undefined |
| index | string | undefined |
| documents | any[] | undefined |
| alsoTokenize | boolean | false |
Promise<string>
▸ testSession(first_message): Promise<void>
| Name | Type |
|---|---|
| first_message | Frame |
Promise<void>
▸ translateQuerySpaceToAnswerSpace(query): Promise<string>
| Name | Type |
|---|---|
| query | string |
Promise<string>
▸ wf_esindex(pindex, pdocument): Promise<void>
Function
wf_esindex
Description
A workflow that will index a document into Elasticsearch
| Name | Type |
|---|---|
| pindex | string |
| pdocument | any |
Promise<void>
▸ wf_essearch(index, query): Promise<any>
Function
wf_essearch
Description
A workflow that will search Elasticsearch
| Name | Type |
|---|---|
| index | string |
| query | any |
Promise<any>
▸ xNLPonDB(host, username, password, dbname, query): Promise<any>
Function
xNLPonDB
Description
Takes a natural language query and translates it into SQL.
| Name | Type | Description |
|---|---|---|
| host | string | - |
| username | string | - |
| password | string | - |
| dbname | string | - |
| query | string | The natural language query to parse. |
Promise<any>
- The results of the SQL query.
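A natural-language-to-SQL workflow of this shape typically builds a prompt that embeds the database schema alongside the question, asks the LLM for a SQL statement, and then executes that statement against the database. The sketch below shows only the prompt-construction step; `buildNl2SqlPrompt` is a hypothetical helper, and the real xNLPonDB presumably derives the schema from the live connection parameters above.

```typescript
// Hypothetical sketch of the prompt an NL-to-SQL workflow might build.
function buildNl2SqlPrompt(schema: string, question: string): string {
  return [
    "Given the following database schema:",
    schema,
    "Translate this question into a single SQL query:",
    question,
    "SQL:",
  ].join("\n\n");
}

const prompt = buildNl2SqlPrompt(
  "CREATE TABLE users (id INT, name VARCHAR(255));",
  "How many users are there?"
);
```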