Commit ddf881d

Merge pull request #8 from flashvayne/develop
update
2 parents c136466 + 41146c9 commit ddf881d

File tree: 12 files changed, +364 −27 lines


README.md

Lines changed: 45 additions & 10 deletions
````diff
@@ -7,34 +7,69 @@ Use chatgpt in springboot project easily.
 This starter is based on Openai Official Apis.
 
 ## Usage
-1.Add maven dependency.
+### 1.Add maven dependency.
 ```pom
 <dependency>
     <groupId>io.github.flashvayne</groupId>
     <artifactId>chatgpt-spring-boot-starter</artifactId>
-    <version>1.0.1</version>
+    <version>1.0.2</version>
 </dependency>
 ```
-2.Set chatgpt properties in your application.yml
+### 2.Set chatgpt properties in your application.yml
+
 ```yml
 chatgpt:
-  api-key: xxxxxxxxxxx #your api-key. It can be generated in the link https://platform.openai.com/account/api-keys
-  # some properties as below have default values. Of course, you can change them.
-  # max-tokens: 300 # The maximum number of tokens to generate in the completion.The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
-  # model: text-davinci-003 # GPT-3 models can understand and generate natural language. We offer four main models with different levels of power suitable for different tasks. Davinci is the most capable model, and Ada is the fastest.
-  # temperature: 0.0 # What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.We generally recommend altering this or top_p but not both.
-  # top-p: 1.0 # An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.We generally recommend altering this or temperature but not both.
+  api-key: xxxxxxxxxxx #api-key. It can be generated here https://platform.openai.com/account/api-keys
+  # some properties as below have default values. For descriptions of these fields, please refer to https://platform.openai.com/docs/api-reference/completions/create and https://platform.openai.com/docs/api-reference/chat/create
+  # url: https://api.openai.com/v1/completions
+  # model: text-davinci-003
+  # max-tokens: 500
+  # temperature: 0.0
+  # top-p: 1.0
+  # multi:
+  #   url: https://api.openai.com/v1/chat/completions
+  #   model: gpt-3.5-turbo
+  #   max-tokens: 500
+  #   temperature: 0.0
+  #   top-p: 1.0
 ```
-3.Inject bean ChatgptService anywhere you require it, and invoke its method to send message to chatgpt and get the response.
+### 3.Inject bean ChatgptService anywhere you require it, and invoke its method to send message to chatgpt and get the response.
+#### 3.1 Single message
 ```java
 @Autowired
 private ChatgptService chatgptService;
 
 public void test(){
+    String responseMessage = chatgptService.multiChat(Arrays.asList(new MultiChatMessage("user","how are you?")));
+    System.out.print(responseMessage); //\n\nAs an AI language model, I don't have feelings, but I'm functioning well. Thank you for asking. How can I assist you today?
+}
+
+public void test2(){
     String responseMessage = chatgptService.sendMessage("how are you");
     System.out.print(responseMessage); //I'm doing well, thank you. How about you?
 }
 ```
+#### 3.2 Multi message. You can take a series of messages (including the conversation history) as input, and return a response message as output.
+```java
+@Autowired
+private ChatgptService chatgptService;
+
+public void testMultiChat(){
+    List<MultiChatMessage> messages = Arrays.asList(
+            new MultiChatMessage("system","You are a helpful assistant."),
+            new MultiChatMessage("user","Who won the world series in 2020?"),
+            new MultiChatMessage("assistant","The Los Angeles Dodgers won the World Series in 2020."),
+            new MultiChatMessage("user","Where was it played?"));
+    String responseMessage = chatgptService.multiChat(messages);
+    System.out.print(responseMessage); //The 2020 World Series was played at Globe Life Field in Arlington, Texas.
+}
+```
+Messages must be an array of message objects, where each object has a role (either "system", "user", or "assistant") and content (the content of the message). Conversations can be as short as 1 message or fill many pages.
+
++ The system message helps set the behavior of the assistant. In the example above, the assistant was instructed with "You are a helpful assistant."
++ The user messages help instruct the assistant. They can be generated by the end users of an application, or set by a developer as an instruction.
++ The assistant messages help store prior responses. They can also be written by a developer to help give examples of desired behavior.
+For more details, please refer to [chat format](https://platform.openai.com/docs/guides/chat/introduction)
 
 ## Demo project:
 [demo-chatgpt-spring-boot-starter](https://github.com/flashvayne/demo-chatgpt-spring-boot-starter)
````

pom.xml

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@
 
     <groupId>io.github.flashvayne</groupId>
     <artifactId>chatgpt-spring-boot-starter</artifactId>
-    <version>1.0.1</version>
+    <version>1.0.2</version>
 
     <name>chatgpt-spring-boot-starter</name>
     <description>a starter to use chatgpt in springboot project easily</description>
```
src/main/java/io/github/flashvayne/chatgpt/dto/ChatRequest.java

Lines changed: 44 additions & 5 deletions

```diff
@@ -1,22 +1,27 @@
 package io.github.flashvayne.chatgpt.dto;
 
+import com.fasterxml.jackson.annotation.JsonInclude;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import lombok.AllArgsConstructor;
 import lombok.Data;
+import lombok.NoArgsConstructor;
 
+import java.util.Map;
+
+/**
+ * ChatRequest is used to construct request body.
+ * For descriptions of all fields, please refer to <a href="https://platform.openai.com/docs/api-reference/completions/create">Create completion</a>
+ */
 @Data
+@NoArgsConstructor
 @AllArgsConstructor
+@JsonInclude(JsonInclude.Include.NON_NULL)
 public class ChatRequest {
 
     private String model;
 
     private String prompt;
 
-    /**
-     * The maximum number of tokens to generate in the completion.
-     * The token count of your prompt plus max_tokens cannot exceed the model's context length.
-     * Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
-     */
     @JsonProperty("max_tokens")
     private Integer maxTokens;
 
@@ -25,4 +30,38 @@ public class ChatRequest {
     @JsonProperty("top_p")
     private Double topP;
 
+    private String suffix;
+
+    private Integer n;
+
+    private Boolean stream;
+
+    private Integer logprobs;
+
+    private Boolean echo;
+
+    private String stop;
+
+    @JsonProperty("presence_penalty")
+    private Double presencePenalty;
+
+    @JsonProperty("frequency_penalty")
+    private Double frequencyPenalty;
+
+    @JsonProperty("best_of")
+    private Integer bestOf;
+
+    @JsonProperty("logit_bias")
+    private Map<String, Integer> logitBias;
+
+    private String user;
+
+    public ChatRequest(String model, String prompt, Integer maxTokens, Double temperature, Double topP) {
+        this.model = model;
+        this.prompt = prompt;
+        this.maxTokens = maxTokens;
+        this.temperature = temperature;
+        this.topP = topP;
+    }
+
 }
```
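Because the class is now annotated with `@JsonInclude(JsonInclude.Include.NON_NULL)`, only the fields a caller actually sets appear in the serialized request. A sketch of the JSON body produced when only the five constructor fields are populated (illustrative values; field names follow the `@JsonProperty` mappings above):

```json
{
  "model": "text-davinci-003",
  "prompt": "how are you",
  "max_tokens": 500,
  "temperature": 0.0,
  "top_p": 1.0
}
```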
src/main/java/io/github/flashvayne/chatgpt/dto/chat/MultiChatMessage.java (new file)

Lines changed: 13 additions & 0 deletions

```java
package io.github.flashvayne.chatgpt.dto.chat;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class MultiChatMessage {
    private String role;
    private String content;
}
```
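Lombok generates the constructors, accessors, `equals`/`hashCode`, and `toString` for this class at compile time. As a rough hand-written sketch of what those three annotations expand to (an approximation for illustration, not Lombok's exact output):

```java
import java.util.Objects;

// Hand-expanded approximation of Lombok's @Data, @NoArgsConstructor
// and @AllArgsConstructor on MultiChatMessage.
class MultiChatMessage {
    private String role;    // "system", "user", or "assistant"
    private String content; // the message text

    public MultiChatMessage() {}                           // @NoArgsConstructor

    public MultiChatMessage(String role, String content) { // @AllArgsConstructor
        this.role = role;
        this.content = content;
    }

    // @Data: getters and setters
    public String getRole() { return role; }
    public void setRole(String role) { this.role = role; }
    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }

    // @Data: value-based equals/hashCode
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MultiChatMessage)) return false;
        MultiChatMessage that = (MultiChatMessage) o;
        return Objects.equals(role, that.role) && Objects.equals(content, that.content);
    }

    @Override
    public int hashCode() { return Objects.hash(role, content); }

    @Override
    public String toString() {
        return "MultiChatMessage(role=" + role + ", content=" + content + ")";
    }

    public static void main(String[] args) {
        MultiChatMessage msg = new MultiChatMessage("user", "how are you?");
        System.out.println(msg); // MultiChatMessage(role=user, content=how are you?)
    }
}
```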
src/main/java/io/github/flashvayne/chatgpt/dto/chat/MultiChatRequest.java (new file)

Lines changed: 63 additions & 0 deletions

```java
package io.github.flashvayne.chatgpt.dto.chat;

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.util.List;
import java.util.Map;

/**
 * MultiChatRequest is used to construct request body.
 * For descriptions of all fields, please refer to <a href="https://platform.openai.com/docs/api-reference/chat/create">Create chat completion</a>
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
@JsonInclude(JsonInclude.Include.NON_NULL)
public class MultiChatRequest {
    private String model;

    private List<MultiChatMessage> messages;

    /**
     * The maximum number of tokens to generate in the completion.
     * The token count of your prompt plus max_tokens cannot exceed the model's context length.
     * Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
     */
    @JsonProperty("max_tokens")
    private Integer maxTokens;

    private Double temperature;

    @JsonProperty("top_p")
    private Double topP;

    private Integer n;

    private Boolean stream;

    private String stop;

    @JsonProperty("presence_penalty")
    private Double presencePenalty;

    @JsonProperty("frequency_penalty")
    private Double frequencyPenalty;

    @JsonProperty("logit_bias")
    private Map<String, Integer> logitBias;

    private String user;

    public MultiChatRequest(String model, List<MultiChatMessage> messages, Integer maxTokens, Double temperature, Double topP) {
        this.model = model;
        this.messages = messages;
        this.maxTokens = maxTokens;
        this.temperature = temperature;
        this.topP = topP;
    }

}
```
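Serialized with Jackson, this DTO produces the request shape the chat completions endpoint expects: a `model` plus a `messages` array of role/content objects, with the `@JsonProperty`-renamed fields in snake_case and null fields omitted. A sketch with illustrative values:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "how are you?"}
  ],
  "max_tokens": 500,
  "temperature": 0.0,
  "top_p": 1.0
}
```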
src/main/java/io/github/flashvayne/chatgpt/dto/chat/MultiChatResponse.java (new file)

Lines changed: 21 additions & 0 deletions

```java
package io.github.flashvayne.chatgpt.dto.chat;

import io.github.flashvayne.chatgpt.dto.Usage;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.time.LocalDate;
import java.util.List;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class MultiChatResponse {
    private String id;
    private String object;
    private LocalDate created;
    private String model;
    private List<MultiResponseChoice> choices;
    private Usage usage;
}
```
src/main/java/io/github/flashvayne/chatgpt/dto/chat/MultiResponseChoice.java (new file)

Lines changed: 18 additions & 0 deletions

```java
package io.github.flashvayne.chatgpt.dto.chat;

import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class MultiResponseChoice {
    private MultiChatMessage message;

    @JsonProperty("finish_reason")
    private String finishReason;

    private Integer index;
}
```
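These two DTOs together map the chat completions response body: `choices[*].message` carries the assistant reply, and `finish_reason`/`index` bind via `@JsonProperty`. A sketch of the payload per the OpenAI API reference (illustrative id and token counts; note the API documents `created` as a Unix timestamp):

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "message": {"role": "assistant", "content": "The 2020 World Series was played at Globe Life Field in Arlington, Texas."},
      "finish_reason": "stop",
      "index": 0
    }
  ],
  "usage": {"prompt_tokens": 56, "completion_tokens": 18, "total_tokens": 74}
}
```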

src/main/java/io/github/flashvayne/chatgpt/property/ChatgptProperties.java

Lines changed: 25 additions & 1 deletion
```diff
@@ -7,16 +7,24 @@
 @ConfigurationProperties(prefix = "chatgpt")
 public class ChatgptProperties {
 
-    //apiKey
     private String apiKey = "";
 
+    private String url = "https://api.openai.com/v1/completions";
+
     private String model = "text-davinci-003";
 
     private Integer maxTokens = 500;
 
     private Double temperature = 0.0;
 
     private Double topP = 1.0;
+
+    private MultiChatProperties multi;
+
+    public ChatgptProperties() {
+        this.multi = new MultiChatProperties();
+    }
+
     public String getApiKey() {
         return apiKey;
     }
@@ -25,6 +33,14 @@ public void setApiKey(String apiKey) {
         this.apiKey = apiKey;
     }
 
+    public String getUrl() {
+        return url;
+    }
+
+    public void setUrl(String url) {
+        this.url = url;
+    }
+
     public String getModel() {
         return model;
     }
@@ -56,4 +72,12 @@ public Double getTopP() {
     public void setTopP(Double topP) {
         this.topP = topP;
     }
+
+    public MultiChatProperties getMulti() {
+        return multi;
+    }
+
+    public void setMulti(MultiChatProperties multi) {
+        this.multi = multi;
+    }
 }
```
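Spring Boot's relaxed binding maps kebab-case keys under the `chatgpt` prefix to these camelCase fields, and a nested `multi:` block binds to the new `MultiChatProperties` member. A minimal application.yml sketch (placeholder key value):

```yml
chatgpt:
  api-key: sk-xxxx        # binds to apiKey
  max-tokens: 300         # binds to maxTokens
  multi:
    model: gpt-3.5-turbo  # binds to multi.model
```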
src/main/java/io/github/flashvayne/chatgpt/property/MultiChatProperties.java (new file)

Lines changed: 54 additions & 0 deletions

```java
package io.github.flashvayne.chatgpt.property;

public class MultiChatProperties {

    private String url = "https://api.openai.com/v1/chat/completions";

    private String model = "gpt-3.5-turbo";

    private Integer maxTokens = 500;

    private Double temperature = 0.0;

    private Double topP = 1.0;

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getModel() {
        return model;
    }

    public void setModel(String model) {
        this.model = model;
    }

    public Integer getMaxTokens() {
        return maxTokens;
    }

    public void setMaxTokens(Integer maxTokens) {
        this.maxTokens = maxTokens;
    }

    public Double getTemperature() {
        return temperature;
    }

    public void setTemperature(Double temperature) {
        this.temperature = temperature;
    }

    public Double getTopP() {
        return topP;
    }

    public void setTopP(Double topP) {
        this.topP = topP;
    }
}
```
