Class: Polars::DataFrame
Inherits: Object
Includes: Plot
Defined in: lib/polars/data_frame.rb
Overview
Two-dimensional data structure representing data as a table with rows and columns.
Instance Method Summary
-
#!=(other) ⇒ DataFrame
Not equal.
-
#%(other) ⇒ DataFrame
Returns the modulo.
-
#*(other) ⇒ DataFrame
Performs multiplication.
-
#+(other) ⇒ DataFrame
Performs addition.
-
#-(other) ⇒ DataFrame
Performs subtraction.
-
#/(other) ⇒ DataFrame
Performs division.
-
#<(other) ⇒ DataFrame
Less than.
-
#<=(other) ⇒ DataFrame
Less than or equal.
-
#==(other) ⇒ DataFrame
Equal.
-
#>(other) ⇒ DataFrame
Greater than.
-
#>=(other) ⇒ DataFrame
Greater than or equal.
-
#[](*args) ⇒ Object
Returns a subset of the DataFrame.
-
#[]=(*key, value) ⇒ Object
Set item.
-
#cast(dtypes, strict: true) ⇒ DataFrame
Cast DataFrame column(s) to the specified dtype(s).
-
#clear(n = 0) ⇒ DataFrame
(also: #cleared)
Create an empty copy of the current DataFrame.
-
#columns ⇒ Array
Get column names.
-
#columns=(columns) ⇒ Object
Change the column names of the DataFrame.
-
#delete(name) ⇒ Series
Drop in place if exists.
-
#describe ⇒ DataFrame
Summary statistics for a DataFrame.
-
#drop(*columns) ⇒ DataFrame
Remove column from DataFrame and return as new.
-
#drop_in_place(name) ⇒ Series
Drop in place.
-
#drop_nulls(subset: nil) ⇒ DataFrame
Return a new DataFrame where the null values are dropped.
-
#dtypes ⇒ Array
Get dtypes of columns in DataFrame.
-
#each(&block) ⇒ Object
Returns an enumerator.
-
#each_row(named: true, buffer_size: 500, &block) ⇒ Object
Returns an iterator over the DataFrame of rows of Ruby-native values.
-
#equals(other, null_equal: true) ⇒ Boolean
(also: #frame_equal)
Check if DataFrame is equal to other.
-
#estimated_size(unit = "b") ⇒ Numeric
Return an estimation of the total (heap) allocated size of the DataFrame.
-
#explode(columns) ⇒ DataFrame
Explode DataFrame to long format by exploding a column with Lists.
-
#extend(other) ⇒ DataFrame
Extend the memory backed by this DataFrame with the values from other.
-
#fill_nan(fill_value) ⇒ DataFrame
Fill floating point NaN values by an Expression evaluation.
-
#fill_null(value = nil, strategy: nil, limit: nil, matches_supertype: true) ⇒ DataFrame
Fill null values using the specified value or strategy.
-
#filter(predicate) ⇒ DataFrame
Filter the rows in the DataFrame based on a predicate expression.
-
#flags ⇒ Hash
Get flags that are set on the columns of this DataFrame.
-
#fold ⇒ Series
Apply a horizontal reduction on a DataFrame.
-
#gather_every(n, offset = 0) ⇒ DataFrame
(also: #take_every)
Take every nth row in the DataFrame and return as a new DataFrame.
-
#get_column(name) ⇒ Series
Get a single column as Series by name.
-
#get_column_index(name) ⇒ Integer
(also: #find_idx_by_name)
Find the index of a column by name.
-
#get_columns ⇒ Array
Get the DataFrame as a Array of Series.
-
#group_by(by, maintain_order: false) ⇒ GroupBy
(also: #groupby, #group)
Start a group by operation.
-
#group_by_dynamic(index_column, every:, period: nil, offset: nil, truncate: true, include_boundaries: false, closed: "left", by: nil, start_by: "window") ⇒ DataFrame
(also: #groupby_dynamic)
Group based on a time value (or index value of type :i32 or :i64).
-
#hash_rows(seed: 0, seed_1: nil, seed_2: nil, seed_3: nil) ⇒ Series
Hash and combine the rows in this DataFrame.
-
#head(n = 5) ⇒ DataFrame
Get the first n rows.
-
#height ⇒ Integer
(also: #count, #length, #size)
Get the height of the DataFrame.
-
#hstack(columns, in_place: false) ⇒ DataFrame
Return a new DataFrame grown horizontally by stacking multiple Series to it.
-
#include?(name) ⇒ Boolean
Check if DataFrame includes column.
-
#initialize(data = nil, schema: nil, columns: nil, schema_overrides: nil, strict: true, orient: nil, infer_schema_length: 100, nan_to_null: false) ⇒ DataFrame
constructor
Create a new DataFrame.
-
#insert_column(index, series) ⇒ DataFrame
(also: #insert_at_idx)
Insert a Series at a certain column index.
-
#interpolate ⇒ DataFrame
Interpolate intermediate values.
-
#is_duplicated ⇒ Series
Get a mask of all duplicated rows in this DataFrame.
-
#is_empty ⇒ Boolean
(also: #empty?)
Check if the dataframe is empty.
-
#is_unique ⇒ Series
Get a mask of all unique rows in this DataFrame.
-
#item ⇒ Object
Return the dataframe as a scalar.
-
#iter_rows(named: false, buffer_size: 500, &block) ⇒ Object
Returns an iterator over the DataFrame of rows of Ruby-native values.
-
#join(other, left_on: nil, right_on: nil, on: nil, how: "inner", suffix: "_right", validate: "m:m", join_nulls: false, coalesce: nil) ⇒ DataFrame
Join in SQL-like fashion.
-
#join_asof(other, left_on: nil, right_on: nil, on: nil, by_left: nil, by_right: nil, by: nil, strategy: "backward", suffix: "_right", tolerance: nil, allow_parallel: true, force_parallel: false, coalesce: true) ⇒ DataFrame
Perform an asof join.
-
#lazy ⇒ LazyFrame
Start a lazy query from this point.
-
#limit(n = 5) ⇒ DataFrame
Get the first n rows.
-
#map_rows(return_dtype: nil, inference_size: 256, &f) ⇒ Object
(also: #apply)
Apply a custom/user-defined function (UDF) over the rows of the DataFrame.
-
#max ⇒ DataFrame
Aggregate the columns of this DataFrame to their maximum value.
-
#max_horizontal ⇒ Series
Get the maximum value horizontally across columns.
-
#mean ⇒ DataFrame
Aggregate the columns of this DataFrame to their mean value.
-
#mean_horizontal(ignore_nulls: true) ⇒ Series
Take the mean of all values horizontally across columns.
-
#median ⇒ DataFrame
Aggregate the columns of this DataFrame to their median value.
-
#merge_sorted(other, key) ⇒ DataFrame
Take two sorted DataFrames and merge them by the sorted key.
-
#min ⇒ DataFrame
Aggregate the columns of this DataFrame to their minimum value.
-
#min_horizontal ⇒ Series
Get the minimum value horizontally across columns.
-
#n_chunks(strategy: "first") ⇒ Object
Get number of chunks used by the ChunkedArrays of this DataFrame.
-
#n_unique(subset: nil) ⇒ DataFrame
Return the number of unique rows, or the number of unique row-subsets.
-
#null_count ⇒ DataFrame
Create a new DataFrame that shows the null counts per column.
-
#partition_by(groups, maintain_order: true, include_key: true, as_dict: false) ⇒ Object
Split into multiple DataFrames partitioned by groups.
-
#pipe(func, *args, **kwargs, &block) ⇒ Object
Offers a structured way to apply a sequence of user-defined functions (UDFs).
-
#pivot(on, index: nil, values: nil, aggregate_function: nil, maintain_order: true, sort_columns: false, separator: "_") ⇒ DataFrame
Create a spreadsheet-style pivot table as a DataFrame.
-
#plot(x = nil, y = nil, type: nil, group: nil, stacked: nil) ⇒ Vega::LiteChart
included from Plot
Plot data.
-
#product ⇒ DataFrame
Aggregate the columns of this DataFrame to their product values.
-
#quantile(quantile, interpolation: "nearest") ⇒ DataFrame
Aggregate the columns of this DataFrame to their quantile value.
-
#rechunk ⇒ DataFrame
Rechunk the data in this DataFrame to a contiguous allocation.
-
#rename(mapping, strict: true) ⇒ DataFrame
Rename column names.
-
#replace(column, new_col) ⇒ DataFrame
Replace a column by a new Series.
-
#replace_column(index, series) ⇒ DataFrame
(also: #replace_at_idx)
Replace a column at an index location.
-
#reverse ⇒ DataFrame
Reverse the DataFrame.
-
#rolling(index_column:, period:, offset: nil, closed: "right", by: nil) ⇒ RollingGroupBy
(also: #groupby_rolling, #group_by_rolling)
Create rolling groups based on a time column.
-
#row(index = nil, by_predicate: nil, named: false) ⇒ Object
Get a row as an array of values, either by index or by predicate.
-
#rows(named: false) ⇒ Array
Convert columnar data to rows as Ruby arrays.
-
#sample(n: nil, frac: nil, with_replacement: false, shuffle: false, seed: nil) ⇒ DataFrame
Sample from this DataFrame.
-
#schema ⇒ Hash
Get the schema.
-
#select(*exprs, **named_exprs) ⇒ DataFrame
Select columns from this DataFrame.
-
#set_sorted(column, descending: false) ⇒ DataFrame
Indicate that one or multiple columns are sorted.
-
#shape ⇒ Array
Get the shape of the DataFrame.
-
#shift(n, fill_value: nil) ⇒ DataFrame
Shift values by the given period.
-
#shift_and_fill(periods, fill_value) ⇒ DataFrame
Shift the values by a given period and fill the resulting null values.
-
#shrink_to_fit(in_place: false) ⇒ DataFrame
Shrink DataFrame memory usage.
-
#slice(offset, length = nil) ⇒ DataFrame
Get a slice of this DataFrame.
-
#sort(by, reverse: false, nulls_last: false) ⇒ DataFrame
Sort the DataFrame by column.
-
#sort!(by, reverse: false, nulls_last: false) ⇒ DataFrame
Sort the DataFrame by column in-place.
-
#std(ddof: 1) ⇒ DataFrame
Aggregate the columns of this DataFrame to their standard deviation value.
-
#sum ⇒ DataFrame
Aggregate the columns of this DataFrame to their sum value.
-
#sum_horizontal(ignore_nulls: true) ⇒ Series
Sum all values horizontally across columns.
-
#tail(n = 5) ⇒ DataFrame
Get the last n rows.
-
#to_a ⇒ Array
Returns an array representing the DataFrame.
-
#to_csv(**options) ⇒ String
Write to comma-separated values (CSV) string.
-
#to_dummies(columns: nil, separator: "_", drop_first: false) ⇒ DataFrame
Get one hot encoded dummy variables.
-
#to_h(as_series: true) ⇒ Hash
Convert DataFrame to a hash mapping column name to values.
-
#to_hashes ⇒ Array
Convert every row to a hash.
-
#to_numo ⇒ Numo::NArray
Convert DataFrame to a 2D Numo array.
-
#to_s ⇒ String
(also: #inspect)
Returns a string representing the DataFrame.
-
#to_series(index = 0) ⇒ Series
Select column as Series at index location.
-
#to_struct(name) ⇒ Series
Convert a DataFrame to a Series of type Struct.
-
#transpose(include_header: false, header_name: "column", column_names: nil) ⇒ DataFrame
Transpose a DataFrame over the diagonal.
-
#unique(maintain_order: true, subset: nil, keep: "first") ⇒ DataFrame
Drop duplicate rows from this DataFrame.
-
#unnest(names) ⇒ DataFrame
Decompose a struct into its fields.
-
#unpivot(on, index: nil, variable_name: nil, value_name: nil) ⇒ DataFrame
(also: #melt)
Unpivot a DataFrame from wide to long format.
-
#unstack(step:, how: "vertical", columns: nil, fill_values: nil) ⇒ DataFrame
Unstack a long table to a wide form without doing an aggregation.
-
#upsample(time_column:, every:, by: nil, maintain_order: false) ⇒ DataFrame
Upsample a DataFrame at a regular frequency.
-
#var(ddof: 1) ⇒ DataFrame
Aggregate the columns of this DataFrame to their variance value.
-
#vstack(df, in_place: false) ⇒ DataFrame
Grow this DataFrame vertically by stacking a DataFrame to it.
-
#width ⇒ Integer
Get the width of the DataFrame.
-
#with_column(column) ⇒ DataFrame
Return a new DataFrame with the column added or replaced.
-
#with_columns(*exprs, **named_exprs) ⇒ DataFrame
Add columns to this DataFrame.
-
#with_row_index(name: "index", offset: 0) ⇒ DataFrame
(also: #with_row_count)
Add a column at index 0 that counts the rows.
-
#write_avro(file, compression = "uncompressed", name: "") ⇒ nil
Write to Apache Avro file.
-
#write_csv(file = nil, has_header: true, include_header: nil, sep: ",", quote: '"', batch_size: 1024, datetime_format: nil, date_format: nil, time_format: nil, float_precision: nil, null_value: nil) ⇒ String?
Write to comma-separated values (CSV) file.
-
#write_delta(target, mode: "error", storage_options: nil, delta_write_options: nil, delta_merge_options: nil) ⇒ nil
Write DataFrame as delta table.
-
#write_ipc(file, compression: "uncompressed", compat_level: nil, storage_options: nil, retries: 2) ⇒ nil
Write to Arrow IPC binary stream or Feather file.
-
#write_ipc_stream(file, compression: "uncompressed", compat_level: nil) ⇒ Object
Write to Arrow IPC record batch stream.
-
#write_json(file = nil, pretty: false, row_oriented: false) ⇒ nil
Serialize to JSON representation.
-
#write_ndjson(file = nil) ⇒ nil
Serialize to newline delimited JSON representation.
-
#write_parquet(file, compression: "zstd", compression_level: nil, statistics: false, row_group_size: nil, data_page_size: nil) ⇒ nil
Write to Apache Parquet file.
Constructor Details
#initialize(data = nil, schema: nil, columns: nil, schema_overrides: nil, strict: true, orient: nil, infer_schema_length: 100, nan_to_null: false) ⇒ DataFrame
Create a new DataFrame.
# File 'lib/polars/data_frame.rb', line 50
def initialize(data = nil, schema: nil, columns: nil, schema_overrides: nil, strict: true, orient: nil, infer_schema_length: 100, nan_to_null: false)
  if schema && columns
    warn "columns is ignored when schema is passed"
  end
  schema ||= columns

  if defined?(ActiveRecord) && (data.is_a?(ActiveRecord::Relation) || data.is_a?(ActiveRecord::Result))
    raise ArgumentError, "Use read_database instead"
  end

  if data.nil?
    self._df = self.class.hash_to_rbdf({}, schema: schema, schema_overrides: schema_overrides)
  elsif data.is_a?(Hash)
    data = data.transform_keys { |v| v.is_a?(Symbol) ? v.to_s : v }
    self._df = self.class.hash_to_rbdf(data, schema: schema, schema_overrides: schema_overrides, strict: strict, nan_to_null: nan_to_null)
  elsif data.is_a?(::Array)
    self._df = self.class.sequence_to_rbdf(data, schema: schema, schema_overrides: schema_overrides, strict: strict, orient: orient, infer_schema_length: infer_schema_length)
  elsif data.is_a?(Series)
    self._df = self.class.series_to_rbdf(data, schema: schema, schema_overrides: schema_overrides, strict: strict)
  elsif data.respond_to?(:arrow_c_stream)
    # This uses the fact that RbSeries.from_arrow_c_stream will create a
    # struct-typed Series. Then we unpack that to a DataFrame.
    tmp_col_name = ""
    s = Utils.wrap_s(RbSeries.from_arrow_c_stream(data))
    self._df = s.to_frame(tmp_col_name).unnest(tmp_col_name)._df
  else
    raise ArgumentError, "DataFrame constructor called with unsupported type; got #{data.class.name}"
  end
end
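A minimal construction sketch (column names and values are illustrative, not from the library's own examples; assumes the polars-df gem is installed):
require "polars-df"

# Column-oriented: a Hash of column name => values (Symbol keys become strings).
df = Polars::DataFrame.new({"a" => [1, 2, 3], "b" => ["x", "y", "z"]})

# Row-oriented: an Array of Arrays plus a schema; orient: "row" disambiguates.
df2 = Polars::DataFrame.new([[1, "x"], [2, "y"]], schema: ["a", "b"], orient: "row")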
Instance Method Details
#!=(other) ⇒ DataFrame
Not equal.
# File 'lib/polars/data_frame.rb', line 230
def !=(other)
  _comp(other, "neq")
end
#%(other) ⇒ DataFrame
Returns the modulo.
# File 'lib/polars/data_frame.rb', line 313
def %(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.rem_df(other._df))
  end
  other = _prepare_other_arg(other)
  _from_rbdf(_df.rem(other._s))
end
#*(other) ⇒ DataFrame
Performs multiplication.
# File 'lib/polars/data_frame.rb', line 265
def *(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.mul_df(other._df))
  end
  other = _prepare_other_arg(other)
  _from_rbdf(_df.mul(other._s))
end
#+(other) ⇒ DataFrame
Performs addition.
# File 'lib/polars/data_frame.rb', line 289
def +(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.add_df(other._df))
  end
  other = _prepare_other_arg(other)
  _from_rbdf(_df.add(other._s))
end
#-(other) ⇒ DataFrame
Performs subtraction.
# File 'lib/polars/data_frame.rb', line 301
def -(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.sub_df(other._df))
  end
  other = _prepare_other_arg(other)
  _from_rbdf(_df.sub(other._s))
end
#/(other) ⇒ DataFrame
Performs division.
# File 'lib/polars/data_frame.rb', line 277
def /(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.div_df(other._df))
  end
  other = _prepare_other_arg(other)
  _from_rbdf(_df.div(other._s))
end
#<(other) ⇒ DataFrame
Less than.
# File 'lib/polars/data_frame.rb', line 244
def <(other)
  _comp(other, "lt")
end
#<=(other) ⇒ DataFrame
Less than or equal.
# File 'lib/polars/data_frame.rb', line 258
def <=(other)
  _comp(other, "lt_eq")
end
#==(other) ⇒ DataFrame
Equal.
# File 'lib/polars/data_frame.rb', line 223
def ==(other)
  _comp(other, "eq")
end
#>(other) ⇒ DataFrame
Greater than.
# File 'lib/polars/data_frame.rb', line 237
def >(other)
  _comp(other, "gt")
end
#>=(other) ⇒ DataFrame
Greater than or equal.
# File 'lib/polars/data_frame.rb', line 251
def >=(other)
  _comp(other, "gt_eq")
end
#[](*args) ⇒ Object
Returns a subset of the DataFrame.
# File 'lib/polars/data_frame.rb', line 354
def [](*args)
  if args.size == 2
    row_selection, col_selection = args

    # df[.., unknown]
    if row_selection.is_a?(Range)
      # multiple slices
      # df[.., ..]
      if col_selection.is_a?(Range)
        raise Todo
      end
    end

    # df[2, ..] (select row as df)
    if row_selection.is_a?(Integer)
      if col_selection.is_a?(::Array)
        df = self[0.., col_selection]
        return df.slice(row_selection, 1)
      end
      # df[2, "a"]
      if col_selection.is_a?(::String) || col_selection.is_a?(Symbol)
        return self[col_selection][row_selection]
      end
    end

    # column selection can be "a" and ["a", "b"]
    if col_selection.is_a?(::String) || col_selection.is_a?(Symbol)
      col_selection = [col_selection]
    end

    # df[.., 1]
    if col_selection.is_a?(Integer)
      series = to_series(col_selection)
      return series[row_selection]
    end

    if col_selection.is_a?(::Array)
      # df[.., [1, 2]]
      if Utils.is_int_sequence(col_selection)
        series_list = col_selection.map { |i| to_series(i) }
        df = self.class.new(series_list)
        return df[row_selection]
      end
    end

    df = self[col_selection]
    return df[row_selection]
  elsif args.size == 1
    item = args[0]

    # select single column
    # df["foo"]
    if item.is_a?(::String) || item.is_a?(Symbol)
      return Utils.wrap_s(_df.get_column(item.to_s))
    end

    # df[idx]
    if item.is_a?(Integer)
      return slice(_pos_idx(item, 0), 1)
    end

    # df[..]
    if item.is_a?(Range)
      return Slice.new(self).apply(item)
    end

    if item.is_a?(::Array) && item.all? { |v| Utils.strlike?(v) }
      # select multiple columns
      # df[["foo", "bar"]]
      return _from_rbdf(_df.select(item.map(&:to_s)))
    end

    if Utils.is_int_sequence(item)
      item = Series.new("", item)
    end

    if item.is_a?(Series)
      dtype = item.dtype
      if dtype == String
        return _from_rbdf(_df.select(item))
      elsif dtype == UInt32
        return _from_rbdf(_df.take_with_series(item._s))
      elsif [UInt8, UInt16, UInt64, Int8, Int16, Int32, Int64].include?(dtype)
        return _from_rbdf(
          _df.take_with_series(_pos_idxs(item, 0)._s)
        )
      end
    end
  end

  # Ruby-specific
  if item.is_a?(Expr) || item.is_a?(Series)
    return filter(item)
  end

  raise ArgumentError, "Cannot get item of type: #{item.class.name}"
end
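A hedged sketch of the indexing forms dispatched above (illustrative column names):
df = Polars::DataFrame.new({"a" => [1, 2, 3], "b" => ["x", "y", "z"]})
df["a"]          # single column by name => Series
df[["a", "b"]]   # multiple columns => DataFrame
df[0]            # single row by index => 1-row DataFrame
df[1..2]         # rows by range => DataFrame
df[0, "a"]       # row index plus column name => scalar value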
#[]=(*key, value) ⇒ Object
Set item.
# File 'lib/polars/data_frame.rb', line 456
def []=(*key, value)
  if key.length == 1
    key = key.first
  elsif key.length != 2
    raise ArgumentError, "wrong number of arguments (given #{key.length + 1}, expected 2..3)"
  end

  if Utils.strlike?(key)
    if value.is_a?(::Array) || (defined?(Numo::NArray) && value.is_a?(Numo::NArray))
      value = Series.new(value)
    elsif !value.is_a?(Series)
      value = Polars.lit(value)
    end
    self._df = with_column(value.alias(key.to_s))._df
  elsif key.is_a?(::Array)
    row_selection, col_selection = key

    if Utils.strlike?(col_selection)
      s = self[col_selection]
    elsif col_selection.is_a?(Integer)
      raise Todo
    else
      raise ArgumentError, "column selection not understood: #{col_selection}"
    end

    s[row_selection] = value

    if col_selection.is_a?(Integer)
      replace_column(col_selection, s)
    elsif Utils.strlike?(col_selection)
      replace(col_selection, s)
    end
  else
    raise Todo
  end
end
#cast(dtypes, strict: true) ⇒ DataFrame
Cast DataFrame column(s) to the specified dtype(s).
# File 'lib/polars/data_frame.rb', line 2951
def cast(dtypes, strict: true)
  lazy.cast(dtypes, strict: strict).collect(_eager: true)
end
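A small casting sketch (column names are illustrative; dtype classes such as Polars::Float64 come from the library):
df = Polars::DataFrame.new({"a" => [1, 2], "b" => ["1.5", "2.5"]})
df.cast({"a" => Polars::Float64, "b" => Polars::Float64})
# With strict: false, values that cannot be cast become null instead of raising.
df.cast({"b" => Polars::Int64}, strict: false)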
#clear(n = 0) ⇒ DataFrame Also known as: cleared
Create an empty copy of the current DataFrame.
Returns a DataFrame with identical schema but no data.
# File 'lib/polars/data_frame.rb', line 2991
def clear(n = 0)
  if n == 0
    _from_rbdf(_df.clear)
  elsif n > 0 || len > 0
    self.class.new(
      schema.to_h { |nm, tp| [nm, Series.new(nm, [], dtype: tp).extend_constant(nil, n)] }
    )
  else
    clone
  end
end
#columns ⇒ Array
Get column names.
# File 'lib/polars/data_frame.rb', line 140
def columns
  _df.columns
end
#columns=(columns) ⇒ Object
Change the column names of the DataFrame.
# File 'lib/polars/data_frame.rb', line 173
def columns=(columns)
  _df.set_column_names(columns)
end
#delete(name) ⇒ Series
Drop in place if exists.
# File 'lib/polars/data_frame.rb', line 2898
def delete(name)
  drop_in_place(name) if include?(name)
end
#describe ⇒ DataFrame
Summary statistics for a DataFrame.
# File 'lib/polars/data_frame.rb', line 1321
def describe
  describe_cast = lambda do |stat|
    columns = []
    self.columns.each_with_index do |s, i|
      if self[s].is_numeric || self[s].is_boolean
        columns << stat[0.., i].cast(:f64)
      else
        # for dates, strings, etc, we cast to string so that all
        # statistics can be shown
        columns << stat[0.., i].cast(:str)
      end
    end
    self.class.new(columns)
  end

  summary = _from_rbdf(
    Polars.concat(
      [
        describe_cast.(
          self.class.new(columns.to_h { |c| [c, [height]] })
        ),
        describe_cast.(null_count),
        describe_cast.(mean),
        describe_cast.(std),
        describe_cast.(min),
        describe_cast.(max),
        describe_cast.(median)
      ]
    )._df
  )
  summary.insert_column(
    0,
    Polars::Series.new(
      "describe",
      ["count", "null_count", "mean", "std", "min", "max", "median"]
    )
  )
  summary
end
#drop(*columns) ⇒ DataFrame
Remove column from DataFrame and return as new.
# File 'lib/polars/data_frame.rb', line 2838
def drop(*columns)
  lazy.drop(*columns).collect(_eager: true)
end
#drop_in_place(name) ⇒ Series
Drop in place.
# File 'lib/polars/data_frame.rb', line 2866
def drop_in_place(name)
  Utils.wrap_s(_df.drop_in_place(name))
end
#drop_nulls(subset: nil) ⇒ DataFrame
Return a new DataFrame where the null values are dropped.
# File 'lib/polars/data_frame.rb', line 1702
def drop_nulls(subset: nil)
  lazy.drop_nulls(subset: subset).collect(_eager: true)
end
#dtypes ⇒ Array
Get dtypes of columns in DataFrame. Dtypes can also be found in column headers when printing the DataFrame.
# File 'lib/polars/data_frame.rb', line 191
def dtypes
  _df.dtypes
end
#each(&block) ⇒ Object
Returns an enumerator.
# File 'lib/polars/data_frame.rb', line 347
def each(&block)
  get_columns.each(&block)
end
#each_row(named: true, buffer_size: 500, &block) ⇒ Object
Returns an iterator over the DataFrame of rows of Ruby-native values.
# File 'lib/polars/data_frame.rb', line 4864
def each_row(named: true, buffer_size: 500, &block)
  iter_rows(named: named, buffer_size: buffer_size, &block)
end
#equals(other, null_equal: true) ⇒ Boolean Also known as: frame_equal
Check if DataFrame is equal to other.
# File 'lib/polars/data_frame.rb', line 1514
def equals(other, null_equal: true)
  _df.equals(other._df, null_equal)
end
#estimated_size(unit = "b") ⇒ Numeric
Return an estimation of the total (heap) allocated size of the DataFrame.
Estimated size is given in the specified unit (bytes by default).
This estimation is the sum of the sizes of its buffers and validity bitmaps, including nested arrays. Multiple arrays may share buffers and bitmaps, so the combined size of two arrays is not necessarily the sum of their individually estimated sizes. In particular, a StructArray's size is an upper bound.
When an array is sliced, its allocated size remains constant because the buffer is unchanged. However, this function will yield a smaller number, because it returns the visible size of the buffer, not its total capacity.
FFI buffers are included in this estimation.
# File 'lib/polars/data_frame.rb', line 1064
def estimated_size(unit = "b")
  sz = _df.estimated_size
  Utils.scale_bytes(sz, to: unit)
end
#explode(columns) ⇒ DataFrame
Explode DataFrame to long format by exploding a column with Lists.
# File 'lib/polars/data_frame.rb', line 3240
def explode(columns)
  lazy.explode(columns).collect(no_optimization: true)
end
#extend(other) ⇒ DataFrame
Extend the memory backed by this DataFrame with the values from other.
Different from vstack, which adds the chunks from other to the chunks of this DataFrame, extend appends the data from other to the underlying memory locations and thus may cause a reallocation.
If this does not cause a reallocation, the resulting data structure will not have any extra chunks and thus will yield faster queries.
Prefer extend over vstack when you want to do a query after a single append. For instance, during online operations where you add n rows and rerun a query.
Prefer vstack over extend when you want to append many times before doing a query. For instance, when you read in multiple files and want to store them in a single DataFrame. In the latter case, finish the sequence of vstack operations with a rechunk.
# File 'lib/polars/data_frame.rb', line 2778
def extend(other)
  _df.extend(other._df)
  self
end
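A sketch of the append trade-off described above (df1, df2, df3 stand for frames with identical schemas; names are illustrative):
# One append, then query: extend writes into the existing memory.
df1.extend(df2)
# Many appends, then query: vstack collects chunks cheaply; rechunk once at the end.
combined = df1.vstack(df2).vstack(df3).rechunk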
#fill_nan(fill_value) ⇒ DataFrame
Note that floating point NaNs (Not a Number) are not missing values! To replace missing values, use fill_null.
Fill floating point NaN values by an Expression evaluation.
# File 'lib/polars/data_frame.rb', line 3205
def fill_nan(fill_value)
  lazy.fill_nan(fill_value).collect(no_optimization: true)
end
#fill_null(value = nil, strategy: nil, limit: nil, matches_supertype: true) ⇒ DataFrame
Fill null values using the specified value or strategy.
# File 'lib/polars/data_frame.rb', line 3165
def fill_null(value = nil, strategy: nil, limit: nil, matches_supertype: true)
  _from_rbdf(
    lazy
      .fill_null(value, strategy: strategy, limit: limit, matches_supertype: matches_supertype)
      .collect(no_optimization: true)
      ._df
  )
end
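A minimal sketch of both filling modes (illustrative data):
df = Polars::DataFrame.new({"a" => [1, nil, 3]})
df.fill_null(99)                    # fill with a literal value
df.fill_null(strategy: "forward")   # or with a strategy such as "forward"/"backward"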
#filter(predicate) ⇒ DataFrame
Filter the rows in the DataFrame based on a predicate expression.
# File 'lib/polars/data_frame.rb', line 1287
def filter(predicate)
  lazy.filter(predicate).collect
end
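A minimal predicate sketch (illustrative column names):
df = Polars::DataFrame.new({"a" => [1, 2, 3], "b" => ["x", "y", "z"]})
df.filter(Polars.col("a") > 1)
df.filter((Polars.col("a") > 1) & (Polars.col("b") == "y"))   # combine predicates with & and |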
#flags ⇒ Hash
Get flags that are set on the columns of this DataFrame.
# File 'lib/polars/data_frame.rb', line 198
def flags
  columns.to_h { |name| [name, self[name].flags] }
end
#fold ⇒ Series
Apply a horizontal reduction on a DataFrame.
This can be used to effectively determine aggregations on a row level, and can be applied to any DataType that can be supercast (cast to a common supertype).
For example, the supercast rules when applying an arithmetic operation on two DataTypes are:
i8 + str = str
f32 + i64 = f32
f32 + f64 = f64
# File 'lib/polars/data_frame.rb', line 4673
def fold
  acc = to_series(0)
  1.upto(width - 1) do |i|
    acc = yield(acc, to_series(i))
  end
  acc
end
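A sketch of a horizontal reduction via the block form (columns assumed numeric; names illustrative):
df = Polars::DataFrame.new({"a" => [1, 2], "b" => [10, 20]})
row_sums = df.fold { |acc, s| acc + s }   # => Series [11, 22]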
#gather_every(n, offset = 0) ⇒ DataFrame Also known as: take_every
Take every nth row in the DataFrame and return as a new DataFrame.
# File 'lib/polars/data_frame.rb', line 4901
def gather_every(n, offset = 0)
  select(F.col("*").gather_every(n, offset))
end
#get_column(name) ⇒ Series
Get a single column as Series by name.
# File 'lib/polars/data_frame.rb', line 3082
def get_column(name)
  self[name]
end
#get_column_index(name) ⇒ Integer Also known as: find_idx_by_name
Find the index of a column by name.
# File 'lib/polars/data_frame.rb', line 1374
def get_column_index(name)
  _df.get_column_index(name)
end
#get_columns ⇒ Array
Get the DataFrame as a Array of Series.
# File 'lib/polars/data_frame.rb', line 3060
def get_columns
  _df.get_columns.map { |s| Utils.wrap_s(s) }
end
#group_by(by, maintain_order: false) ⇒ GroupBy Also known as: groupby, group
Start a group by operation.
# File 'lib/polars/data_frame.rb', line 1810
def group_by(by, maintain_order: false)
  if !Utils.bool?(maintain_order)
    raise TypeError, "invalid input for group_by arg `maintain_order`: #{maintain_order}."
  end
  GroupBy.new(
    self,
    by,
    maintain_order: maintain_order
  )
end
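A minimal group-and-aggregate sketch (illustrative data; agg is assumed from the GroupBy API):
df = Polars::DataFrame.new({"k" => ["a", "a", "b"], "v" => [1, 2, 3]})
df.group_by("k", maintain_order: true).agg(Polars.col("v").sum)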
#group_by_dynamic(index_column, every:, period: nil, offset: nil, truncate: true, include_boundaries: false, closed: "left", by: nil, start_by: "window") ⇒ DataFrame Also known as: groupby_dynamic
Group based on a time value (or index value of type :i32 or :i64).
Time windows are calculated and rows are assigned to windows. Unlike a normal group by, a row can be a member of multiple groups. The time/index window could be seen as a rolling window, with a window size determined by dates/times/values instead of slots in the DataFrame.
A window is defined by:
- every: interval of the window
- period: length of the window
- offset: offset of the window
The every, period and offset arguments are created with the following string language:
- 1ns (1 nanosecond)
- 1us (1 microsecond)
- 1ms (1 millisecond)
- 1s (1 second)
- 1m (1 minute)
- 1h (1 hour)
- 1d (1 day)
- 1w (1 week)
- 1mo (1 calendar month)
- 1y (1 calendar year)
- 1i (1 index count)
Or combine them: "3d12h4m25s" # 3 days, 12 hours, 4 minutes, and 25 seconds
In case of a group_by_dynamic on an integer column, the windows are defined by:
- "1i" # length 1
- "10i" # length 10
# File 'lib/polars/data_frame.rb', line 2150
def group_by_dynamic(
  index_column,
  every:,
  period: nil,
  offset: nil,
  truncate: true,
  include_boundaries: false,
  closed: "left",
  by: nil,
  start_by: "window"
)
  DynamicGroupBy.new(
    self,
    index_column,
    every,
    period,
    offset,
    truncate,
    include_boundaries,
    closed,
    by,
    start_by
  )
end
#hash_rows(seed: 0, seed_1: nil, seed_2: nil, seed_3: nil) ⇒ Series
Hash and combine the rows in this DataFrame.
The hash value is of type :u64.
# File 'lib/polars/data_frame.rb', line 4938
def hash_rows(seed: 0, seed_1: nil, seed_2: nil, seed_3: nil)
  k0 = seed
  k1 = seed_1.nil? ? seed : seed_1
  k2 = seed_2.nil? ? seed : seed_2
  k3 = seed_3.nil? ? seed : seed_3
  Utils.wrap_s(_df.hash_rows(k0, k1, k2, k3))
end
#head(n = 5) ⇒ DataFrame
Get the first n rows.
# File 'lib/polars/data_frame.rb', line 1641
def head(n = 5)
  _from_rbdf(_df.head(n))
end
#height ⇒ Integer Also known as: count, length, size
Get the height of the DataFrame.
# File 'lib/polars/data_frame.rb', line 107
def height
  _df.height
end
#hstack(columns, in_place: false) ⇒ DataFrame
Return a new DataFrame grown horizontally by stacking multiple Series to it.
# File 'lib/polars/data_frame.rb', line 2680
def hstack(columns, in_place: false)
  if !columns.is_a?(::Array)
    columns = columns.get_columns
  end
  if in_place
    _df.hstack_mut(columns.map(&:_s))
    self
  else
    _from_rbdf(_df.hstack(columns.map(&:_s)))
  end
end
#include?(name) ⇒ Boolean
Check if DataFrame includes column.
# File 'lib/polars/data_frame.rb', line 340
def include?(name)
  columns.include?(name)
end
#insert_column(index, series) ⇒ DataFrame Also known as: insert_at_idx
Insert a Series at a certain column index. This operation is in place.
# File 'lib/polars/data_frame.rb', line 1240
def insert_column(index, series)
  if index < 0
    index = columns.length + index
  end
  _df.insert_column(index, series._s)
  self
end
#interpolate ⇒ DataFrame
Interpolate intermediate values. The interpolation method is linear.
# File 'lib/polars/data_frame.rb', line 4971
def interpolate
  select(F.col("*").interpolate)
end
#is_duplicated ⇒ Series
Get a mask of all duplicated rows in this DataFrame.
# File 'lib/polars/data_frame.rb', line 3717
def is_duplicated
  Utils.wrap_s(_df.is_duplicated)
end
#is_empty ⇒ Boolean Also known as: empty?
Check if the dataframe is empty.
# File 'lib/polars/data_frame.rb', line 4985
def is_empty
  height == 0
end
#is_unique ⇒ Series
Get a mask of all unique rows in this DataFrame.
# File 'lib/polars/data_frame.rb', line 3742
def is_unique
  Utils.wrap_s(_df.is_unique)
end
#item ⇒ Object
Return the dataframe as a scalar.
Equivalent to df[0, 0], with a check that the shape is (1, 1).
# File 'lib/polars/data_frame.rb', line 509
def item
  if shape != [1, 1]
    raise ArgumentError, "Can only call .item if the dataframe is of shape (1,1), dataframe is of shape #{shape}"
  end
  self[0, 0]
end
#iter_rows(named: false, buffer_size: 500, &block) ⇒ Object
Returns an iterator over the DataFrame of rows of Ruby-native values.
# File 'lib/polars/data_frame.rb', line 4817
def iter_rows(named: false, buffer_size: 500, &block)
  return to_enum(:iter_rows, named: named, buffer_size: buffer_size) unless block_given?

  # load into the local namespace for a modest performance boost in the hot loops
  columns = self.columns

  # note: buffering rows results in a 2-4x speedup over individual calls
  # to ".row(i)", so it should only be disabled in extremely specific cases.
  if buffer_size
    offset = 0
    while offset < height
      zerocopy_slice = slice(offset, buffer_size)
      rows_chunk = zerocopy_slice.rows(named: false)
      if named
        rows_chunk.each do |row|
          yield columns.zip(row).to_h
        end
      else
        rows_chunk.each(&block)
      end
      offset += buffer_size
    end
  elsif named
    height.times do |i|
      yield columns.zip(row(i)).to_h
    end
  else
    height.times do |i|
      yield row(i)
    end
  end
end
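A minimal iteration sketch (illustrative data):
df = Polars::DataFrame.new({"a" => [1, 2], "b" => ["x", "y"]})
df.iter_rows { |row| p row }                 # each row as an Array of values
df.iter_rows(named: true) { |h| p h["a"] }   # each row as a Hash keyed by column name
enum = df.iter_rows                          # without a block, an Enumerator is returned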
#join(other, left_on: nil, right_on: nil, on: nil, how: "inner", suffix: "_right", validate: "m:m", join_nulls: false, coalesce: nil) ⇒ DataFrame
Join in SQL-like fashion.
# File 'lib/polars/data_frame.rb', line 2509
def join(
  other,
  left_on: nil,
  right_on: nil,
  on: nil,
  how: "inner",
  suffix: "_right",
  validate: "m:m",
  join_nulls: false,
  coalesce: nil
)
  lazy
    .join(
      other.lazy,
      left_on: left_on,
      right_on: right_on,
      on: on,
      how: how,
      suffix: suffix,
      validate: validate,
      join_nulls: join_nulls,
      coalesce: coalesce
    )
    .collect(no_optimization: true)
end
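A minimal join sketch (illustrative frames and key):
left = Polars::DataFrame.new({"id" => [1, 2, 3], "x" => ["a", "b", "c"]})
right = Polars::DataFrame.new({"id" => [2, 3, 4], "y" => [20, 30, 40]})
left.join(right, on: "id", how: "left")   # how: "inner", "left", etc.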
#join_asof(other, left_on: nil, right_on: nil, on: nil, by_left: nil, by_right: nil, by: nil, strategy: "backward", suffix: "_right", tolerance: nil, allow_parallel: true, force_parallel: false, coalesce: true) ⇒ DataFrame
Perform an asof join.
This is similar to a left-join except that we match on nearest key rather than equal keys.
Both DataFrames must be sorted by the asof_join key.
For each row in the left DataFrame:
- A "backward" search selects the last row in the right DataFrame whose 'on' key is less than or equal to the left's key.
- A "forward" search selects the first row in the right DataFrame whose 'on' key is greater than or equal to the left's key.
The default is "backward".
# File 'lib/polars/data_frame.rb', line 2365
def join_asof(
  other,
  left_on: nil,
  right_on: nil,
  on: nil,
  by_left: nil,
  by_right: nil,
  by: nil,
  strategy: "backward",
  suffix: "_right",
  tolerance: nil,
  allow_parallel: true,
  force_parallel: false,
  coalesce: true
)
  lazy
    .join_asof(
      other.lazy,
      left_on: left_on,
      right_on: right_on,
      on: on,
      by_left: by_left,
      by_right: by_right,
      by: by,
      strategy: strategy,
      suffix: suffix,
      tolerance: tolerance,
      allow_parallel: allow_parallel,
      force_parallel: force_parallel,
      coalesce: coalesce
    )
    .collect(no_optimization: true)
end
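A hedged asof-join sketch (illustrative data; both frames must be sorted by the key, so set_sorted is used here):
quotes = Polars::DataFrame.new({"t" => [1, 5, 10], "q" => [100, 101, 102]}).set_sorted("t")
trades = Polars::DataFrame.new({"t" => [4, 11], "p" => [1.0, 2.0]}).set_sorted("t")
# Each trade matches the last quote with t <= the trade's t (strategy: "backward").
trades.join_asof(quotes, on: "t", strategy: "backward")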
#lazy ⇒ LazyFrame
Start a lazy query from this point.
# File 'lib/polars/data_frame.rb', line 3749
def lazy
  wrap_ldf(_df.lazy)
end
#limit(n = 5) ⇒ DataFrame
Get the first n rows.
Alias for #head.
# File 'lib/polars/data_frame.rb', line 1610
def limit(n = 5)
  head(n)
end
#map_rows(return_dtype: nil, inference_size: 256, &f) ⇒ Object Also known as: apply
The frame-level apply cannot track column names (as the UDF is a black box that may arbitrarily drop, rearrange, transform, or add new columns); if you want to apply a UDF such that column names are preserved, you should use the expression-level apply syntax instead.
Apply a custom/user-defined function (UDF) over the rows of the DataFrame.
The UDF will receive each row as a tuple of values: udf(row).
Implementing logic using a Ruby function is almost always significantly slower and more memory intensive than implementing the same logic using the native expression API because:
- The native expression engine runs in Rust; UDFs run in Ruby.
- Use of Ruby UDFs forces the DataFrame to be materialized in memory.
- Polars-native expressions can be parallelised (UDFs cannot).
- Polars-native expressions can be logically optimised (UDFs cannot).
Wherever possible you should strongly prefer the native expression API to achieve the best performance.
# File 'lib/polars/data_frame.rb', line 2594
def map_rows(return_dtype: nil, inference_size: 256, &f)
  out, is_df = _df.map_rows(f, return_dtype, inference_size)
  if is_df
    _from_rbdf(out)
  else
    _from_rbdf(Utils.wrap_s(out).to_frame._df)
  end
end
#max ⇒ DataFrame
Aggregate the columns of this DataFrame to their maximum value.
# File 'lib/polars/data_frame.rb', line 4009
def max
  lazy.max.collect(_eager: true)
end
#max_horizontal ⇒ Series
Get the maximum value horizontally across columns.
# File 'lib/polars/data_frame.rb', line 4033
def max_horizontal
  select(max: F.max_horizontal(F.all)).to_series
end
#mean ⇒ DataFrame
Aggregate the columns of this DataFrame to their mean value.
# File 'lib/polars/data_frame.rb', line 4165
def mean
  lazy.mean.collect(_eager: true)
end
#mean_horizontal(ignore_nulls: true) ⇒ Series
Take the mean of all values horizontally across columns.
# File 'lib/polars/data_frame.rb', line 4193
def mean_horizontal(ignore_nulls: true)
  select(
    mean: F.mean_horizontal(F.all, ignore_nulls: ignore_nulls)
  ).to_series
end
#median ⇒ DataFrame
Aggregate the columns of this DataFrame to their median value.
# File 'lib/polars/data_frame.rb', line 4303
def median
  lazy.median.collect(_eager: true)
end
#merge_sorted(other, key) ⇒ DataFrame
Take two sorted DataFrames and merge them by the sorted key.
The output of this operation will also be sorted. It is the caller's responsibility to ensure that the frames are sorted by that key, otherwise the output will not make sense.
The schemas of both DataFrames must be equal.
# File 'lib/polars/data_frame.rb', line 5100
def merge_sorted(other, key)
  lazy.merge_sorted(other.lazy, key).collect(_eager: true)
end
#min ⇒ DataFrame
Aggregate the columns of this DataFrame to their minimum value.
# File 'lib/polars/data_frame.rb', line 4059
def min
  lazy.min.collect(_eager: true)
end
#min_horizontal ⇒ Series
Get the minimum value horizontally across columns.
# File 'lib/polars/data_frame.rb', line 4083
def min_horizontal
  select(min: F.min_horizontal(F.all)).to_series
end
#n_chunks(strategy: "first") ⇒ Object
Get number of chunks used by the ChunkedArrays of this DataFrame.
# File 'lib/polars/data_frame.rb', line 3977
def n_chunks(strategy: "first")
  if strategy == "first"
    _df.n_chunks
  elsif strategy == "all"
    get_columns.map(&:n_chunks)
  else
    raise ArgumentError, "Strategy: '#{strategy}' not understood. Choose one of ['first', 'all']"
  end
end
#n_unique(subset: nil) ⇒ DataFrame
Return the number of unique rows, or the number of unique row-subsets.
# File 'lib/polars/data_frame.rb', line 4476
def n_unique(subset: nil)
  if subset.is_a?(::String)
    subset = [Polars.col(subset)]
  elsif subset.is_a?(Expr)
    subset = [subset]
  end

  if subset.is_a?(::Array) && subset.length == 1
    expr = Utils.wrap_expr(Utils.parse_into_expression(subset[0], str_as_lit: false))
  else
    struct_fields = subset.nil? ? Polars.all : subset
    expr = Polars.struct(struct_fields)
  end

  df = lazy.select(expr.n_unique).collect
  df.is_empty ? 0 : df.row(0)[0]
end
#null_count ⇒ DataFrame
Create a new DataFrame that shows the null counts per column.
# File 'lib/polars/data_frame.rb', line 4526
def null_count
  _from_rbdf(_df.null_count)
end
#partition_by(groups, maintain_order: true, include_key: true, as_dict: false) ⇒ Object
Split into multiple DataFrames partitioned by groups.
# File 'lib/polars/data_frame.rb', line 3590
def partition_by(groups, maintain_order: true, include_key: true, as_dict: false)
  if groups.is_a?(::String)
    groups = [groups]
  elsif !groups.is_a?(::Array)
    groups = Array(groups)
  end

  if as_dict
    out = {}
    if groups.length == 1
      _df.partition_by(groups, maintain_order, include_key).each do |df|
        df = _from_rbdf(df)
        out[df[groups][0, 0]] = df
      end
    else
      _df.partition_by(groups, maintain_order, include_key).each do |df|
        df = _from_rbdf(df)
        out[df[groups].row(0)] = df
      end
    end
    out
  else
    _df.partition_by(groups, maintain_order, include_key).map { |df| _from_rbdf(df) }
  end
end
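A minimal partitioning sketch (illustrative data):
df = Polars::DataFrame.new({"k" => ["a", "a", "b"], "v" => [1, 2, 3]})
df.partition_by("k")                 # => Array of per-group DataFrames
df.partition_by("k", as_dict: true)  # => Hash keyed by group value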
#pipe(func, *args, **kwargs, &block) ⇒ Object
It is recommended to use LazyFrame when piping operations, in order to fully take advantage of query optimization and parallelization. See #lazy.
Offers a structured way to apply a sequence of user-defined functions (UDFs).
# File 'lib/polars/data_frame.rb', line 1742
def pipe(func, *args, **kwargs, &block)
  func.call(self, *args, **kwargs, &block)
end
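A minimal piping sketch (the lambda and column names are illustrative):
add_n = ->(data, n) { data.with_column((Polars.col("v") + n).alias("v_plus")) }
df = Polars::DataFrame.new({"v" => [1, 2]})
df.pipe(add_n, 10)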
#pivot(on, index: nil, values: nil, aggregate_function: nil, maintain_order: true, sort_columns: false, separator: "_") ⇒ DataFrame
Create a spreadsheet-style pivot table as a DataFrame.
# File 'lib/polars/data_frame.rb', line 3281
def pivot(
  on,
  index: nil,
  values: nil,
  aggregate_function: nil,
  maintain_order: true,
  sort_columns: false,
  separator: "_"
)
  index = Utils._expand_selectors(self, index)
  on = Utils._expand_selectors(self, on)
  if !values.nil?
    values = Utils._expand_selectors(self, values)
  end

  if aggregate_function.is_a?(::String)
    case aggregate_function
    when "first"
      aggregate_expr = F.element.first._rbexpr
    when "sum"
      aggregate_expr = F.element.sum._rbexpr
    when "max"
      aggregate_expr = F.element.max._rbexpr
    when "min"
      aggregate_expr = F.element.min._rbexpr
    when "mean"
      aggregate_expr = F.element.mean._rbexpr
    when "median"
      aggregate_expr = F.element.median._rbexpr
    when "last"
      aggregate_expr = F.element.last._rbexpr
    when "len"
      aggregate_expr = F.len._rbexpr
    when "count"
      warn "`aggregate_function: \"count\"` input for `pivot` is deprecated. Use `aggregate_function: \"len\"` instead."
      aggregate_expr = F.len._rbexpr
    else
      raise ArgumentError, "Argument aggregate fn: '#{aggregate_function}' was not expected."
    end
  elsif aggregate_function.nil?
    aggregate_expr = nil
  else
    aggregate_expr = aggregate_function._rbexpr
  end

  _from_rbdf(
    _df.pivot_expr(
      on,
      index,
      values,
      maintain_order,
      sort_columns,
      aggregate_expr,
      separator
    )
  )
end
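A minimal pivot sketch (illustrative data and column names):
df = Polars::DataFrame.new({
  "k" => ["a", "a", "b"],
  "col" => ["x", "y", "x"],
  "v" => [1, 2, 3]
})
df.pivot("col", index: "k", values: "v", aggregate_function: "sum")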
#plot(x = nil, y = nil, type: nil, group: nil, stacked: nil) ⇒ Vega::LiteChart Originally defined in module Plot
Plot data.
#product ⇒ DataFrame
Aggregate the columns of this DataFrame to their product values.
# File 'lib/polars/data_frame.rb', line 4329
def product
  select(Polars.all.product)
end
#quantile(quantile, interpolation: "nearest") ⇒ DataFrame
Aggregate the columns of this DataFrame to their quantile value.
# File 'lib/polars/data_frame.rb', line 4360
def quantile(quantile, interpolation: "nearest")
  lazy.quantile(quantile, interpolation: interpolation).collect(_eager: true)
end
#rechunk ⇒ DataFrame
Rechunk the data in this DataFrame to a contiguous allocation. This will make sure all subsequent operations have optimal and predictable performance.
# File 'lib/polars/data_frame.rb', line 4500
def rechunk
  _from_rbdf(_df.rechunk)
end
#rename(mapping, strict: true) ⇒ DataFrame
Rename column names.
# File 'lib/polars/data_frame.rb', line 1189
def rename(mapping, strict: true)
  lazy.rename(mapping, strict: strict).collect(no_optimization: true)
end
#replace(column, new_col) ⇒ DataFrame
Replace a column by a new Series.
# File 'lib/polars/data_frame.rb', line 1543
def replace(column, new_col)
  _df.replace(column.to_s, new_col._s)
  self
end
#replace_column(index, series) ⇒ DataFrame Also known as: replace_at_idx
Replace a column at an index location.
# File 'lib/polars/data_frame.rb', line 1409
def replace_column(index, series)
  if index < 0
    index = columns.length + index
  end
  _df.replace_column(index, series._s)
  self
end
#reverse ⇒ DataFrame
Reverse the DataFrame.
# File 'lib/polars/data_frame.rb', line 1154
def reverse
  select(Polars.col("*").reverse)
end
#rolling(index_column:, period:, offset: nil, closed: "right", by: nil) ⇒ RollingGroupBy Also known as: groupby_rolling, group_by_rolling
Create rolling groups based on a time column.
Also works for index values of type :i32 or :i64.
Different from a dynamic group by, the windows are determined by the individual values and are not of constant intervals. For constant intervals, use group_by_dynamic.
The period and offset arguments are created either from a timedelta, or by using the following string language:
- 1ns (1 nanosecond)
- 1us (1 microsecond)
- 1ms (1 millisecond)
- 1s (1 second)
- 1m (1 minute)
- 1h (1 hour)
- 1d (1 day)
- 1w (1 week)
- 1mo (1 calendar month)
- 1y (1 calendar year)
- 1i (1 index count)
Or combine them: "3d12h4m25s" # 3 days, 12 hours, 4 minutes, and 25 seconds
In case of a group_by_rolling on an integer column, the windows are defined by:
- "1i" # length 1
- "10i" # length 10
# File 'lib/polars/data_frame.rb', line 1907
def rolling(
  index_column:,
  period:,
  offset: nil,
  closed: "right",
  by: nil
)
  RollingGroupBy.new(self, index_column, period, offset, closed, by)
end
#row(index = nil, by_predicate: nil, named: false) ⇒ Object
The index and by_predicate params are mutually exclusive. Additionally, to ensure clarity, the by_predicate parameter must be supplied by keyword.
When using by_predicate, it is an error condition if anything other than one row is returned; more than one row raises TooManyRowsReturned, and zero rows raise NoRowsReturned (both inherit from RowsException).
Get a row as an array of values, either by index or by predicate.
# File 'lib/polars/data_frame.rb', line 4721
def row(index = nil, by_predicate: nil, named: false)
  if !index.nil? && !by_predicate.nil?
    raise ArgumentError, "Cannot set both 'index' and 'by_predicate'; mutually exclusive"
  elsif index.is_a?(Expr)
    raise TypeError, "Expressions should be passed to the 'by_predicate' param"
  end

  if !index.nil?
    row = _df.row_tuple(index)
    if named
      columns.zip(row).to_h
    else
      row
    end
  elsif !by_predicate.nil?
    if !by_predicate.is_a?(Expr)
      raise TypeError, "Expected by_predicate to be an expression; found #{by_predicate.class.name}"
    end
    rows = filter(by_predicate).rows
    n_rows = rows.length
    if n_rows > 1
      raise TooManyRowsReturned, "Predicate #{by_predicate} returned #{n_rows} rows"
    elsif n_rows == 0
      raise NoRowsReturned, "Predicate #{by_predicate} returned no rows"
    end
    row = rows[0]
    if named
      columns.zip(row).to_h
    else
      row
    end
  else
    raise ArgumentError, "One of 'index' or 'by_predicate' must be set"
  end
end
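A minimal row-access sketch (illustrative data):
df = Polars::DataFrame.new({"a" => [1, 2], "b" => ["x", "y"]})
df.row(0)                                    # by index => Array of values
df.row(0, named: true)                       # => Hash of column name to value
df.row(by_predicate: Polars.col("a") == 2)   # must match exactly one row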
#rows(named: false) ⇒ Array
Convert columnar data to rows as Ruby arrays.
# File 'lib/polars/data_frame.rb', line 4778
def rows(named: false)
  if named
    columns = self.columns
    _df.row_tuples.map do |v|
      columns.zip(v).to_h
    end
  else
    _df.row_tuples
  end
end
#sample(n: nil, frac: nil, with_replacement: false, shuffle: false, seed: nil) ⇒ DataFrame
Sample from this DataFrame.
# File 'lib/polars/data_frame.rb', line 4566
def sample(
  n: nil,
  frac: nil,
  with_replacement: false,
  shuffle: false,
  seed: nil
)
  if !n.nil? && !frac.nil?
    raise ArgumentError, "cannot specify both `n` and `frac`"
  end

  if n.nil? && !frac.nil?
    frac = Series.new("frac", [frac]) unless frac.is_a?(Series)
    return _from_rbdf(
      _df.sample_frac(frac._s, with_replacement, shuffle, seed)
    )
  end

  if n.nil?
    n = 1
  end
  n = Series.new("", [n]) unless n.is_a?(Series)
  _from_rbdf(_df.sample_n(n._s, with_replacement, shuffle, seed))
end
#schema ⇒ Hash
Get the schema.
# File 'lib/polars/data_frame.rb', line 216
def schema
  columns.zip(dtypes).to_h
end
#select(*exprs, **named_exprs) ⇒ DataFrame
Select columns from this DataFrame.
# File 'lib/polars/data_frame.rb', line 3841
def select(*exprs, **named_exprs)
  lazy.select(*exprs, **named_exprs).collect(_eager: true)
end
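A minimal selection sketch (illustrative column names):
df = Polars::DataFrame.new({"a" => [1, 2], "b" => [3, 4]})
df.select("a")                           # by name
df.select(Polars.col("a") * 2)           # by expression
df.select(doubled: Polars.col("a") * 2)  # named expression sets the output column name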
#set_sorted(column, descending: false) ⇒ DataFrame
Indicate that one or multiple columns are sorted.
# File 'lib/polars/data_frame.rb', line 5112
def set_sorted(
  column,
  descending: false
)
  lazy
    .set_sorted(column, descending: descending)
    .collect(no_optimization: true)
end
#shape ⇒ Array
Get the shape of the DataFrame.
# File 'lib/polars/data_frame.rb', line 95
def shape
  _df.shape
end
#shift(n, fill_value: nil) ⇒ DataFrame
Shift values by the given period.
# File 'lib/polars/data_frame.rb', line 3659
def shift(n, fill_value: nil)
  lazy.shift(n, fill_value: fill_value).collect(_eager: true)
end
#shift_and_fill(periods, fill_value) ⇒ DataFrame
Shift the values by a given period and fill the resulting null values.
# File 'lib/polars/data_frame.rb', line 3692
def shift_and_fill(periods, fill_value)
  shift(periods, fill_value: fill_value)
end
#shrink_to_fit(in_place: false) ⇒ DataFrame
Shrink DataFrame memory usage.
Shrinks to fit the exact capacity needed to hold the data.
# File 'lib/polars/data_frame.rb', line 4873
def shrink_to_fit(in_place: false)
  if in_place
    _df.shrink_to_fit
    self
  else
    df = clone
    df._df.shrink_to_fit
    df
  end
end
#slice(offset, length = nil) ⇒ DataFrame
Get a slice of this DataFrame.
# File 'lib/polars/data_frame.rb', line 1577
def slice(offset, length = nil)
  if !length.nil? && length < 0
    length = height - offset + length
  end
  _from_rbdf(_df.slice(offset, length))
end
#sort(by, reverse: false, nulls_last: false) ⇒ DataFrame
Sort the DataFrame by column.
# File 'lib/polars/data_frame.rb', line 1466
def sort(by, reverse: false, nulls_last: false)
  lazy
    .sort(by, reverse: reverse, nulls_last: nulls_last)
    .collect(no_optimization: true)
end
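A minimal sorting sketch (illustrative data):
df = Polars::DataFrame.new({"a" => [3, nil, 1]})
df.sort("a")
df.sort("a", reverse: true, nulls_last: true)   # descending, nulls placed last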
#sort!(by, reverse: false, nulls_last: false) ⇒ DataFrame
Sort the DataFrame by column in-place.
# File 'lib/polars/data_frame.rb', line 1482
def sort!(by, reverse: false, nulls_last: false)
  self._df = sort(by, reverse: reverse, nulls_last: nulls_last)._df
end
#std(ddof: 1) ⇒ DataFrame
Aggregate the columns of this DataFrame to their standard deviation value.
# File 'lib/polars/data_frame.rb', line 4236
def std(ddof: 1)
  lazy.std(ddof: ddof).collect(_eager: true)
end
#sum ⇒ DataFrame
Aggregate the columns of this DataFrame to their sum value.
# File 'lib/polars/data_frame.rb', line 4109
def sum
  lazy.sum.collect(_eager: true)
end
#sum_horizontal(ignore_nulls: true) ⇒ Series
Sum all values horizontally across columns.
# File 'lib/polars/data_frame.rb', line 4137
def sum_horizontal(ignore_nulls: true)
  select(
    sum: F.sum_horizontal(F.all, ignore_nulls: ignore_nulls)
  ).to_series
end
#tail(n = 5) ⇒ DataFrame
Get the last n rows.
# File 'lib/polars/data_frame.rb', line 1672
def tail(n = 5)
  _from_rbdf(_df.tail(n))
end
#to_a ⇒ Array
Returns an array representing the DataFrame.
# File 'lib/polars/data_frame.rb', line 333
def to_a
  rows(named: true)
end
#to_csv(**options) ⇒ String
Write to comma-separated values (CSV) string.
# File 'lib/polars/data_frame.rb', line 800
def to_csv(**)
  write_csv(**)
end
#to_dummies(columns: nil, separator: "_", drop_first: false) ⇒ DataFrame
Get one hot encoded dummy variables.
# File 'lib/polars/data_frame.rb', line 4391
def to_dummies(columns: nil, separator: "_", drop_first: false)
  if columns.is_a?(::String)
    columns = [columns]
  end
  _from_rbdf(_df.to_dummies(columns, separator, drop_first))
end
#to_h(as_series: true) ⇒ Hash
Convert DataFrame to a hash mapping column name to values.
# File 'lib/polars/data_frame.rb', line 521
def to_h(as_series: true)
  if as_series
    get_columns.to_h { |s| [s.name, s] }
  else
    get_columns.to_h { |s| [s.name, s.to_a] }
  end
end
#to_hashes ⇒ Array
Convert every row to a hash.
Note that this is slow.
# File 'lib/polars/data_frame.rb', line 540
def to_hashes
  rbdf = _df
  names = columns

  height.times.map do |i|
    names.zip(rbdf.row_tuple(i)).to_h
  end
end
#to_numo ⇒ Numo::NArray
Convert DataFrame to a 2D Numo array.
This operation clones data.
# File 'lib/polars/data_frame.rb', line 561
def to_numo
  out = _df.to_numo
  if out.nil?
    Numo::NArray.vstack(width.times.map { |i| to_series(i).to_numo }).transpose
  else
    out
  end
end
#to_s ⇒ String Also known as: inspect
Returns a string representing the DataFrame.
# File 'lib/polars/data_frame.rb', line 325
def to_s
  _df.to_s
end
#to_series(index = 0) ⇒ Series
Select column as Series at index location.
# File 'lib/polars/data_frame.rb', line 596
def to_series(index = 0)
  if index < 0
    index = columns.length + index
  end
  Utils.wrap_s(_df.select_at_idx(index))
end
#to_struct(name) ⇒ Series
Convert a DataFrame to a Series of type Struct.
# File 'lib/polars/data_frame.rb', line 5015
def to_struct(name)
  Utils.wrap_s(_df.to_struct(name))
end
#transpose(include_header: false, header_name: "column", column_names: nil) ⇒ DataFrame
This is a very expensive operation. Perhaps you can do it differently.
Transpose a DataFrame over the diagonal.
# File 'lib/polars/data_frame.rb', line 1126
def transpose(include_header: false, header_name: "column", column_names: nil)
  keep_names_as = include_header ? header_name : nil
  _from_rbdf(_df.transpose(keep_names_as, column_names))
end
#unique(maintain_order: true, subset: nil, keep: "first") ⇒ DataFrame
Note that this fails if there is a column of type List in the DataFrame or subset.
Drop duplicate rows from this DataFrame.
# File 'lib/polars/data_frame.rb', line 4436
def unique(maintain_order: true, subset: nil, keep: "first")
  self._from_rbdf(
    lazy
      .unique(maintain_order: maintain_order, subset: subset, keep: keep)
      .collect(no_optimization: true)
      ._df
  )
end
#unnest(names) ⇒ DataFrame
Decompose a struct into its fields.
The fields will be inserted into the DataFrame at the location of the struct type.
# File 'lib/polars/data_frame.rb', line 5051
def unnest(names)
  if names.is_a?(::String)
    names = [names]
  end
  _from_rbdf(_df.unnest(names))
end
#unpivot(on, index: nil, variable_name: nil, value_name: nil) ⇒ DataFrame Also known as: melt
Unpivot a DataFrame from wide to long format.
Optionally leaves identifiers set.
This function is useful to massage a DataFrame into a format where one or more columns are identifier variables (index) while all other columns, considered measured variables (on), are "unpivoted" to the row axis leaving just two non-identifier columns, 'variable' and 'value'.
# File 'lib/polars/data_frame.rb', line 3383

def unpivot(on, index: nil, variable_name: nil, value_name: nil)
  on = on.nil? ? [] : Utils._expand_selectors(self, on)
  index = index.nil? ? [] : Utils._expand_selectors(self, index)

  _from_rbdf(_df.unpivot(on, index, value_name, variable_name))
end
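For example (made-up data):

  df = Polars::DataFrame.new({"id" => [1, 2], "x" => [3, 4], "y" => [5, 6]})
  df.unpivot(["x", "y"], index: "id")
  # => 4 rows with columns "id", "variable" ("x"/"y") and "value"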
#unstack(step:, how: "vertical", columns: nil, fill_values: nil) ⇒ DataFrame
This functionality is experimental and may be subject to changes without it being considered a breaking change.
Unstack a long table to a wide form without doing an aggregation.
This can be much faster than a pivot, because it can skip the grouping phase.
# File 'lib/polars/data_frame.rb', line 3462

def unstack(step:, how: "vertical", columns: nil, fill_values: nil)
  if !columns.nil?
    df = select(columns)
  else
    df = self
  end

  height = df.height
  if how == "vertical"
    n_rows = step
    n_cols = (height / n_rows.to_f).ceil
  else
    n_cols = step
    n_rows = (height / n_cols.to_f).ceil
  end

  n_fill = n_cols * n_rows - height

  if n_fill > 0
    if !fill_values.is_a?(::Array)
      fill_values = [fill_values] * df.width
    end

    df = df.select(
      df.get_columns.zip(fill_values).map do |s, next_fill|
        s.extend_constant(next_fill, n_fill)
      end
    )
  end

  if how == "horizontal"
    df = (
      df.with_column(
        (Polars.arange(0, n_cols * n_rows, eager: true) % n_cols).alias(
          "__sort_order"
        )
      )
      .sort("__sort_order")
      .drop("__sort_order")
    )
  end

  zfill_val = Math.log10(n_cols).floor + 1
  slices =
    df.get_columns.flat_map do |s|
      n_cols.times.map do |slice_nbr|
        s.slice(slice_nbr * n_rows, n_rows).alias("%s_%0#{zfill_val}d" % [s.name, slice_nbr])
      end
    end

  _from_rbdf(DataFrame.new(slices)._df)
end
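For example, splitting one 8-row column into two 4-row columns (made-up data):

  df = Polars::DataFrame.new({"x" => [1, 2, 3, 4, 5, 6, 7, 8]})
  df.unstack(step: 4)
  # => a 4-row frame with columns "x_0" ([1, 2, 3, 4]) and "x_1" ([5, 6, 7, 8])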
#upsample(time_column:, every:, by: nil, maintain_order: false) ⇒ DataFrame
Upsample a DataFrame at a regular frequency.
The every argument is created with the following string language:
- 1ns (1 nanosecond)
- 1us (1 microsecond)
- 1ms (1 millisecond)
- 1s (1 second)
- 1m (1 minute)
- 1h (1 hour)
- 1d (1 day)
- 1w (1 week)
- 1mo (1 calendar month)
- 1y (1 calendar year)
- 1i (1 index count)
Or combine them: "3d12h4m25s" (3 days, 12 hours, 4 minutes, and 25 seconds).
# File 'lib/polars/data_frame.rb', line 2239

def upsample(
  time_column:,
  every:,
  by: nil,
  maintain_order: false
)
  if by.nil?
    by = []
  end
  if by.is_a?(::String)
    by = [by]
  end

  every = Utils.parse_as_duration_string(every)

  _from_rbdf(
    _df.upsample(by, time_column, every, maintain_order)
  )
end
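A minimal sketch with made-up data; the frame must already be sorted by the time column:

  require "date"

  df = Polars::DataFrame.new({
    "time" => [DateTime.new(2021, 1, 1), DateTime.new(2021, 4, 1)],
    "value" => [1, 2]
  })
  df.upsample(time_column: "time", every: "1mo")
  # => one row per month from January to April, with nulls filling the gap months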
#var(ddof: 1) ⇒ DataFrame
Aggregate the columns of this DataFrame to their variance value.
# File 'lib/polars/data_frame.rb', line 4277

def var(ddof: 1)
  lazy.var(ddof: ddof).collect(_eager: true)
end
#vstack(df, in_place: false) ⇒ DataFrame
Grow this DataFrame vertically by stacking another DataFrame onto it.
# File 'lib/polars/data_frame.rb', line 2729

def vstack(df, in_place: false)
  if in_place
    _df.vstack_mut(df._df)
    self
  else
    _from_rbdf(_df.vstack(df._df))
  end
end
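For example (made-up data):

  df1 = Polars::DataFrame.new({"a" => [1], "b" => ["x"]})
  df2 = Polars::DataFrame.new({"a" => [2], "b" => ["y"]})
  df1.vstack(df2)                  # returns a new 2-row frame; df1 unchanged
  df1.vstack(df2, in_place: true)  # appends into df1 and returns df1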
#width ⇒ Integer
Get the width of the DataFrame.
# File 'lib/polars/data_frame.rb', line 122

def width
  _df.width
end
#with_column(column) ⇒ DataFrame
Return a new DataFrame with the column added or replaced.
# File 'lib/polars/data_frame.rb', line 2644

def with_column(column)
  lazy
    .with_column(column)
    .collect(no_optimization: true, string_cache: false)
end
#with_columns(*exprs, **named_exprs) ⇒ DataFrame
Add columns to this DataFrame.
Added columns will replace existing columns with the same name.
# File 'lib/polars/data_frame.rb', line 3953

def with_columns(*exprs, **named_exprs)
  lazy.with_columns(*exprs, **named_exprs).collect(_eager: true)
end
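For example, mixing positional and named expressions (made-up data):

  df = Polars::DataFrame.new({"a" => [1, 2, 3]})
  df.with_columns(
    (Polars.col("a") * 2).alias("doubled"),
    plus_one: Polars.col("a") + 1
  )
  # => adds "doubled" and "plus_one" next to "a"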
#with_row_index(name: "index", offset: 0) ⇒ DataFrame Also known as: with_row_count
Add a column at index 0 that counts the rows.
# File 'lib/polars/data_frame.rb', line 1774

def with_row_index(name: "index", offset: 0)
  _from_rbdf(_df.with_row_index(name, offset))
end
#write_avro(file, compression = "uncompressed", name: "") ⇒ nil
Write to Apache Avro file.
# File 'lib/polars/data_frame.rb', line 812

def write_avro(file, compression = "uncompressed", name: "")
  if compression.nil?
    compression = "uncompressed"
  end
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end
  if name.nil?
    name = ""
  end

  _df.write_avro(file, compression, name)
end
#write_csv(file = nil, has_header: true, include_header: nil, sep: ",", quote: '"', batch_size: 1024, datetime_format: nil, date_format: nil, time_format: nil, float_precision: nil, null_value: nil) ⇒ String?
Write to comma-separated values (CSV) file.
# File 'lib/polars/data_frame.rb', line 737

def write_csv(
  file = nil,
  has_header: true,
  include_header: nil,
  sep: ",",
  quote: '"',
  batch_size: 1024,
  datetime_format: nil,
  date_format: nil,
  time_format: nil,
  float_precision: nil,
  null_value: nil
)
  include_header = has_header if include_header.nil?

  if sep.length > 1
    raise ArgumentError, "only single byte separator is allowed"
  elsif quote.length > 1
    raise ArgumentError, "only single byte quote char is allowed"
  elsif null_value == ""
    null_value = nil
  end

  if file.nil?
    buffer = StringIO.new
    buffer.set_encoding(Encoding::BINARY)
    _df.write_csv(
      buffer,
      include_header,
      sep.ord,
      quote.ord,
      batch_size,
      datetime_format,
      date_format,
      time_format,
      float_precision,
      null_value
    )
    return buffer.string.force_encoding(Encoding::UTF_8)
  end

  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end

  _df.write_csv(
    file,
    include_header,
    sep.ord,
    quote.ord,
    batch_size,
    datetime_format,
    date_format,
    time_format,
    float_precision,
    null_value
  )
  nil
end
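A quick sketch of both modes; the file name is hypothetical:

  df = Polars::DataFrame.new({"a" => [1, 2], "b" => ["x", "y"]})
  csv = df.write_csv                 # no file given: returns the CSV as a String
  df.write_csv("out.csv", sep: ";")  # writes to disk and returns nil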
#write_delta(target, mode: "error", storage_options: nil, delta_write_options: nil, delta_merge_options: nil) ⇒ nil
Write DataFrame as delta table.
# File 'lib/polars/data_frame.rb', line 990

def write_delta(
  target,
  mode: "error",
  storage_options: nil,
  delta_write_options: nil,
  delta_merge_options: nil
)
  Polars.send(:_check_if_delta_available)

  if Utils.pathlike?(target)
    target = Polars.send(:_resolve_delta_lake_uri, target.to_s, strict: false)
  end

  data = self

  if mode == "merge"
    if delta_merge_options.nil?
      msg = "You need to pass delta_merge_options with at least a given predicate for `MERGE` to work."
      raise ArgumentError, msg
    end
    if target.is_a?(::String)
      dt = DeltaLake::Table.new(target, storage_options: storage_options)
    else
      dt = target
    end

    predicate = delta_merge_options.delete(:predicate)
    dt.merge(data, predicate, **delta_merge_options)
  else
    delta_write_options ||= {}

    DeltaLake.write(
      target,
      data,
      mode: mode,
      storage_options: storage_options,
      **delta_write_options
    )
  end
end
#write_ipc(file, compression: "uncompressed", compat_level: nil, storage_options: nil, retries: 2) ⇒ nil
Write to Arrow IPC binary stream or Feather file.
# File 'lib/polars/data_frame.rb', line 834

def write_ipc(
  file,
  compression: "uncompressed",
  compat_level: nil,
  storage_options: nil,
  retries: 2
)
  return_bytes = file.nil?
  if return_bytes
    file = StringIO.new
    file.set_encoding(Encoding::BINARY)
  end
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end

  if compat_level.nil?
    compat_level = true
  end

  if compression.nil?
    compression = "uncompressed"
  end

  if storage_options&.any?
    storage_options = storage_options.to_a
  else
    storage_options = nil
  end

  _df.write_ipc(file, compression, compat_level, storage_options, retries)
  return_bytes ? file.string : nil
end
#write_ipc_stream(file, compression: "uncompressed", compat_level: nil) ⇒ Object
Write to Arrow IPC record batch stream.
See "Streaming format" in https://arrow.apache.org/docs/python/ipc.html.
# File 'lib/polars/data_frame.rb', line 889

def write_ipc_stream(
  file,
  compression: "uncompressed",
  compat_level: nil
)
  return_bytes = file.nil?
  if return_bytes
    file = StringIO.new
    file.set_encoding(Encoding::BINARY)
  elsif Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end

  if compat_level.nil?
    compat_level = true
  end

  if compression.nil?
    compression = "uncompressed"
  end

  _df.write_ipc_stream(file, compression, compat_level)
  return_bytes ? file.string : nil
end
#write_json(file = nil, pretty: false, row_oriented: false) ⇒ String?
Serialize to JSON representation.
# File 'lib/polars/data_frame.rb', line 627

def write_json(
  file = nil,
  pretty: false,
  row_oriented: false
)
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end
  to_string_io = !file.nil? && file.is_a?(StringIO)
  if file.nil? || to_string_io
    buf = StringIO.new
    buf.set_encoding(Encoding::BINARY)
    _df.write_json(buf, pretty, row_oriented)
    json_bytes = buf.string

    json_str = json_bytes.force_encoding(Encoding::UTF_8)
    if to_string_io
      file.write(json_str)
    else
      return json_str
    end
  else
    _df.write_json(file, pretty, row_oriented)
  end
  nil
end
#write_ndjson(file = nil) ⇒ String?
Serialize to newline delimited JSON representation.
# File 'lib/polars/data_frame.rb', line 670

def write_ndjson(file = nil)
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end
  to_string_io = !file.nil? && file.is_a?(StringIO)
  if file.nil? || to_string_io
    buf = StringIO.new
    buf.set_encoding(Encoding::BINARY)
    _df.write_ndjson(buf)
    json_bytes = buf.string

    json_str = json_bytes.force_encoding(Encoding::UTF_8)
    if to_string_io
      file.write(json_str)
    else
      return json_str
    end
  else
    _df.write_ndjson(file)
  end
  nil
end
#write_parquet(file, compression: "zstd", compression_level: nil, statistics: false, row_group_size: nil, data_page_size: nil) ⇒ nil
Write to Apache Parquet file.
# File 'lib/polars/data_frame.rb', line 938

def write_parquet(
  file,
  compression: "zstd",
  compression_level: nil,
  statistics: false,
  row_group_size: nil,
  data_page_size: nil
)
  if compression.nil?
    compression = "uncompressed"
  end
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end

  if statistics == true
    statistics = {
      min: true,
      max: true,
      distinct_count: false,
      null_count: true
    }
  elsif statistics == false
    statistics = {}
  elsif statistics == "full"
    statistics = {
      min: true,
      max: true,
      distinct_count: true,
      null_count: true
    }
  end

  _df.write_parquet(
    file, compression, compression_level, statistics, row_group_size, data_page_size
  )
end
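A minimal sketch; the file name is hypothetical:

  df = Polars::DataFrame.new({"a" => [1, 2]})
  df.write_parquet("data.parquet", compression: "zstd", statistics: true)
  # statistics: true expands to the min/max/null_count hash shown above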