Interface: TextGenerationStreamDetails
Properties
best_of_sequences
• Optional best_of_sequences: TextGenerationStreamBestOfSequence[]
Additional sequences when using the best_of parameter
Defined in
inference/src/tasks/nlp/textGenerationStream.ts:66
finish_reason
• finish_reason: TextGenerationStreamFinishReason
Generation finish reason
Defined in
inference/src/tasks/nlp/textGenerationStream.ts:56
generated_tokens
• generated_tokens: number
Number of generated tokens
Defined in
inference/src/tasks/nlp/textGenerationStream.ts:58
prefill
• prefill: TextGenerationStreamPrefillToken[]
Prompt tokens
Defined in
inference/src/tasks/nlp/textGenerationStream.ts:62
seed
• Optional seed: number
Sampling seed if sampling was activated
Defined in
inference/src/tasks/nlp/textGenerationStream.ts:60
tokens
• tokens: TextGenerationStreamToken[]
Defined in
inference/src/tasks/nlp/textGenerationStream.ts:64
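As a sketch of how these fields might be consumed once a stream finishes, the snippet below defines a local mirror of the interface and a hypothetical `summarizeDetails` helper. The mirrored type shapes and the `finish_reason` values are assumptions based on the fields documented above, not an exact copy of the library's definitions.

```typescript
// Assumed finish reasons; the actual union lives in the library's
// TextGenerationStreamFinishReason type.
type TextGenerationStreamFinishReason = "length" | "eos_token" | "stop_sequence";

// Minimal local mirror of the token shape (assumption for illustration).
interface TextGenerationStreamToken {
  id: number;
  text: string;
  logprob: number;
  special: boolean;
}

// Local mirror of the documented TextGenerationStreamDetails fields.
interface TextGenerationStreamDetails {
  finish_reason: TextGenerationStreamFinishReason; // Generation finish reason
  generated_tokens: number;                        // Number of generated tokens
  seed?: number;                                   // Sampling seed if sampling was activated
  prefill: TextGenerationStreamToken[];            // Prompt tokens
  tokens: TextGenerationStreamToken[];
}

// Hypothetical helper: render a one-line summary of a finished stream.
function summarizeDetails(details: TextGenerationStreamDetails): string {
  // An absent seed indicates sampling was not activated.
  const seedInfo = details.seed === undefined ? "no sampling" : `seed=${details.seed}`;
  return `${details.generated_tokens} tokens (${details.finish_reason}, ${seedInfo})`;
}
```

In practice such a details object arrives on the final event of a `textGenerationStream` call; the helper here only formats it.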