scholarly360 committed
Commit 4a622f3
1 Parent(s): 09fea05

Update README.md

Files changed (1): README.md (+2 −12)

README.md CHANGED

@@ -51,26 +51,20 @@ Just like any ChatGPT equivalent model (For Contracts Domain)
 
 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
-[More Information Needed]
-
 ### Downstream Use [optional]
 
 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 
-[More Information Needed]
 
 ### Out-of-Scope Use
 
 <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 
-[More Information Needed]
 
 ## Bias, Risks, and Limitations
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-[More Information Needed]
-
 ### Recommendations
 
 <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

@@ -91,16 +85,13 @@ Use the code below to get started with the model.
 >>> inputs = tokenizer(prompt, return_tensors="pt")
 >>> outputs = model.generate(**inputs)
 >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
->>> ### Example 1
+>>> ### Example 2
 >>> prompt = """ what is agreement date in 'This COLLABORATION AGREEMENT (Agreement) dated November 14, 2002, is made by and between ZZZ, INC., a Delaware corporation' """
 >>> inputs = tokenizer(prompt, return_tensors="pt")
 >>> outputs = model.generate(**inputs)
 >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
 >>> ### Example 3
->>> prompt = """ ### Instruction:
-what is agreement date
-### Input:
-This COLLABORATION AGREEMENT (Agreement) dated November 14, 2002, is made by and between ZZZ, INC., a Delaware corporation """
+>>> prompt = """ ### Instruction: \n\n what is agreement date ### Input: \n\n This COLLABORATION AGREEMENT (Agreement) dated November 14, 2002, is made by and between ZZZ, INC., a Delaware corporation """
 >>> inputs = tokenizer(prompt, return_tensors="pt")
 >>> outputs = model.generate(**inputs)
 >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

@@ -115,7 +106,6 @@ This COLLABORATION AGREEMENT (Agreement) dated November 14, 2002, is made by and
 
 <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 DataSet will be released soon for the community
-[More Information Needed]
 
 ### Training Procedure
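The change above collapses the instruction-style prompt of Example 3 into a single line using `\n\n` separators. As a minimal sketch of that template, the snippet below wraps it in a small helper; `build_prompt` is a hypothetical name introduced here for illustration and is not part of the model card's code.

```python
def build_prompt(instruction: str, context: str) -> str:
    """Build a single-line instruction-style prompt mirroring the
    '### Instruction: ... ### Input: ...' template from Example 3.
    (Hypothetical helper, not part of the model card's code.)"""
    return f"### Instruction: \n\n {instruction} ### Input: \n\n {context}"

# Same instruction/context as the README's Example 3.
prompt = build_prompt(
    "what is agreement date",
    "This COLLABORATION AGREEMENT (Agreement) dated November 14, 2002, "
    "is made by and between ZZZ, INC., a Delaware corporation",
)
# The resulting string would then be fed to the quick-start pipeline,
# e.g. tokenizer(prompt, return_tensors="pt") followed by model.generate(...).
print(prompt.startswith("### Instruction:"))  # → True
```

Keeping the template in one place makes it easier to keep the `\n\n` separators consistent across examples, which matters for instruction-tuned models that were trained on one exact prompt format.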