Module: RubyLLM::Providers::OpenAI::Streaming
- Included in:
- RubyLLM::Providers::OpenAI
- Defined in:
- lib/ruby_llm/providers/openai/streaming.rb
Overview
Streaming methods of the OpenAI API integration
Class Method Summary
Class Method Details
.build_chunk(data) ⇒ Object
```ruby
# File 'lib/ruby_llm/providers/openai/streaming.rb', line 14

def build_chunk(data)
  usage = data['usage'] || {}
  cached_tokens = usage.dig('prompt_tokens_details', 'cached_tokens')

  Chunk.new(
    role: :assistant,
    model_id: data['model'],
    content: data.dig('choices', 0, 'delta', 'content'),
    tool_calls: parse_tool_calls(data.dig('choices', 0, 'delta', 'tool_calls'), parse_arguments: false),
    input_tokens: usage['prompt_tokens'],
    output_tokens: usage['completion_tokens'],
    cached_tokens: cached_tokens,
    cache_creation_tokens: 0
  )
end
```
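To see how a single streaming event maps onto a chunk, here is a minimal standalone sketch of the same logic. It substitutes a plain `Struct` for `RubyLLM::Chunk` and omits the tool-call parsing; both simplifications are assumptions for illustration, not the library's actual definitions.

```ruby
require 'json'

# Stand-in for RubyLLM::Chunk (assumption for illustration only).
Chunk = Struct.new(:role, :model_id, :content, :input_tokens,
                   :output_tokens, :cached_tokens, keyword_init: true)

# Simplified version of build_chunk without tool-call handling.
def build_chunk(data)
  usage = data['usage'] || {}
  Chunk.new(
    role: :assistant,
    model_id: data['model'],
    content: data.dig('choices', 0, 'delta', 'content'),
    input_tokens: usage['prompt_tokens'],
    output_tokens: usage['completion_tokens'],
    cached_tokens: usage.dig('prompt_tokens_details', 'cached_tokens')
  )
end

# The parsed JSON payload of one server-sent event:
event = JSON.parse(<<~JSON)
  {"model": "gpt-4o",
   "choices": [{"delta": {"content": "Hello"}}],
   "usage": {"prompt_tokens": 12, "completion_tokens": 1,
             "prompt_tokens_details": {"cached_tokens": 0}}}
JSON

chunk = build_chunk(event)
puts chunk.content   # "Hello"
puts chunk.model_id  # "gpt-4o"
```

Note that intermediate delta events from the API typically omit `usage`, so the token fields are `nil` until the final event arrives; the `usage = data['usage'] || {}` guard keeps the `dig` calls safe in that case.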
.parse_streaming_error(data) ⇒ Object
```ruby
# File 'lib/ruby_llm/providers/openai/streaming.rb', line 30

def parse_streaming_error(data)
  error_data = JSON.parse(data)
  return unless error_data['error']

  case error_data.dig('error', 'type')
  when 'server_error'
    [500, error_data['error']['message']]
  when 'rate_limit_exceeded', 'insufficient_quota'
    [429, error_data['error']['message']]
  else
    [400, error_data['error']['message']]
  end
end
```
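The method returns a `[status, message]` pair (or `nil` when the payload carries no error). A self-contained sketch mirroring the same case statement shows the mapping directly:

```ruby
require 'json'

# Mirrors the error-type -> HTTP-status mapping shown above.
def parse_streaming_error(data)
  error_data = JSON.parse(data)
  return unless error_data['error']

  message = error_data['error']['message']
  case error_data.dig('error', 'type')
  when 'server_error'
    [500, message]
  when 'rate_limit_exceeded', 'insufficient_quota'
    [429, message]
  else
    [400, message]
  end
end

payload = '{"error": {"type": "rate_limit_exceeded", "message": "Slow down"}}'
status, message = parse_streaming_error(payload)
# => status is 429, message is "Slow down"
```

Unrecognized error types deliberately fall through to 400, so every error payload still produces a usable status code.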
.stream_url ⇒ Object
```ruby
# File 'lib/ruby_llm/providers/openai/streaming.rb', line 10

def stream_url
  completion_url
end
```