Class: ActiveRecord::ConnectionAdapters::RedshiftbulkAdapter

Inherits:
AbstractAdapter
  • Object
Defined in:
lib/active_record/connection_adapters/redshiftbulk_adapter.rb

Overview

The Redshift adapter works with both the native C (ruby.scripting.ca/postgres/) and the pure Ruby (available both as a gem and from rubyforge.org/frs/?group_id=234&release_id=1944) drivers.

Options:

  • :host - Defaults to “localhost”.

  • :port - Defaults to 5432.

  • :username - Defaults to nothing.

  • :password - Defaults to nothing.

  • :database - The name of the database. No default, must be provided.

  • :schema_search_path - An optional schema search path for the connection given as a string of comma-separated schema names. This is backward-compatible with the :schema_order option.

  • :encoding - An optional client encoding that is used in a SET client_encoding TO <encoding> call on the connection.
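For illustration, a minimal connection sketch using these options. The :adapter key and all connection details below are assumptions (the key is guessed from the adapter file name), not taken from this library:

ActiveRecord::Base.establish_connection(
  :adapter            => 'redshiftbulk',   # assumed adapter key
  :host               => 'warehouse.example.com',
  :port               => 5432,
  :username           => 'deploy',
  :password           => 'secret',
  :database           => 'analytics',
  :schema_search_path => 'public,reporting',
  :encoding           => 'utf8'
)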

Defined Under Namespace

Modules: Utils Classes: BindSubstitution, ExplainPrettyPrinter, StatementPool, TableDefinition

Constant Summary

ADAPTER_NAME =
'Redshiftbulk'
NATIVE_DATABASE_TYPES =
{
  :primary_key => "bigint primary key",
  :identity    => "bigint identity primary key",
  :string      => { :name => "character varying", :limit => 255 },
  :text        => { :name => "text" },
  :integer     => { :name => "integer" },
  :float       => { :name => "float" },
  :decimal     => { :name => "decimal" },
  :datetime    => { :name => "timestamp" },
  :timestamp   => { :name => "timestamp" },
  :time        => { :name => "time" },
  :date        => { :name => "date" },
  :binary      => { :name => "bytea" },
  :boolean     => { :name => "boolean" },
  :xml         => { :name => "xml" },
  :tsvector    => { :name => "tsvector" }
}

Instance Method Summary

Constructor Details

#initialize(connection, logger, connection_parameters, config) ⇒ RedshiftbulkAdapter

Initializes and connects a Redshift adapter.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 349

def initialize(connection, logger, connection_parameters, config)
  super(connection, logger)

  if config.fetch(:prepared_statements) { true }
    @visitor = Arel::Visitors::PostgreSQL.new self
  else
    @visitor = BindSubstitution.new self
  end

  connection_parameters.delete :prepared_statements

  @connection_parameters, @config = connection_parameters, config

  # @local_tz is initialized as nil to avoid warnings when connect tries to use it
  @local_tz = nil
  @table_alias_length = nil

  connect
  @statements = StatementPool.new @connection,
                                  config.fetch(:statement_limit) { 1000 }

  if redshift_version < 80002
    raise "Your version of Redshift (#{redshift_version}) is too old, please upgrade!"
  end

  @local_tz = execute('SHOW TIME ZONE', 'SCHEMA').first["TimeZone"]
end

Instance Method Details

#active? ⇒ Boolean

Is this connection alive and ready for queries?

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 383

def active?
  @connection.query 'SELECT 1'
  true
rescue PGError
  false
end

#adapter_name ⇒ Object

Returns ‘Redshiftbulk’ as the adapter name for identification purposes.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 274

def adapter_name
  ADAPTER_NAME
end

#add_column(table_name, column_name, type, options = {}) ⇒ Object

Adds a new column to the named table. See TableDefinition#column for details of the options you can use.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1015

def add_column(table_name, column_name, type, options = {})
  clear_cache!
  add_column_sql = "ALTER TABLE #{quote_table_name(table_name)} ADD COLUMN #{quote_column_name(column_name)} #{type_to_sql(type, options[:limit], options[:precision], options[:scale])}"
  add_column_options!(add_column_sql, options)

  execute add_column_sql
end
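For illustration, a typical migration-style call; the table and column names are hypothetical:

add_column :distributors, :zipcode, :string, :limit => 10
# Executes roughly:
#   ALTER TABLE "distributors" ADD COLUMN "zipcode" character varying(10)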

#add_index ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1054

def add_index(*)
  # XXX nothing to do
end

#begin_db_transaction ⇒ Object

Begins a transaction.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 735

def begin_db_transaction
  execute "BEGIN"
end

#change_column(table_name, column_name, type, options = {}) ⇒ Object

Changes the column of a table.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1024

def change_column(table_name, column_name, type, options = {})
  clear_cache!
  quoted_table_name = quote_table_name(table_name)

  execute "ALTER TABLE #{quoted_table_name} ALTER COLUMN #{quote_column_name(column_name)} TYPE #{type_to_sql(type, options[:limit], options[:precision], options[:scale])}"

  change_column_default(table_name, column_name, options[:default]) if options_include_default?(options)
  change_column_null(table_name, column_name, options[:null], options[:default]) if options.key?(:null)
end

#change_column_default(table_name, column_name, default) ⇒ Object

Changes the default value of a table column.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1035

def change_column_default(table_name, column_name, default)
  clear_cache!
  execute "ALTER TABLE #{quote_table_name(table_name)} ALTER COLUMN #{quote_column_name(column_name)} SET DEFAULT #{quote(default)}"
end

#change_column_null(table_name, column_name, null, default = nil) ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1040

def change_column_null(table_name, column_name, null, default = nil)
  clear_cache!
  unless null || default.nil?
    execute("UPDATE #{quote_table_name(table_name)} SET #{quote_column_name(column_name)}=#{quote(default)} WHERE #{quote_column_name(column_name)} IS NULL")
  end
  execute("ALTER TABLE #{quote_table_name(table_name)} ALTER #{quote_column_name(column_name)} #{null ? 'DROP' : 'SET'} NOT NULL")
end
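A sketch of the NULL backfill behaviour, with hypothetical table and column names:

change_column_null :users, :admin, false, false
# Backfills existing NULL rows with the given default, then runs roughly:
#   UPDATE "users" SET "admin"='f' WHERE "admin" IS NULL
#   ALTER TABLE "users" ALTER "admin" SET NOT NULL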

#clear_cache! ⇒ Object

Clears the prepared statements cache.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 378

def clear_cache!
  @statements.clear
end

#columns(table_name, name = nil) ⇒ Object

Returns the list of all column definitions for a table.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 860

def columns(table_name, name = nil)
  # Limit, precision, and scale are all handled by the superclass.
  column_definitions(table_name).collect do |column_name, type, default, notnull|
    RedshiftColumn.new(column_name, default, type, notnull == 'f', @config[:read_timezone])
  end
end

#commit_db_transaction ⇒ Object

Commits a transaction.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 740

def commit_db_transaction
  execute "COMMIT"
end

#create_database(name, options = {}) ⇒ Object

Create a new Redshift database. Options include :owner, :template, :encoding, :tablespace, and :connection_limit (note that MySQL uses :charset while Redshift uses :encoding).

Example:

create_database config[:database], config
create_database 'foo_development', :encoding => 'unicode'


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 785

def create_database(name, options = {})
  options = options.reverse_merge(:encoding => "utf8")

  option_string = options.symbolize_keys.sum do |key, value|
    case key
    when :owner
      " OWNER = \"#{value}\""
    when :template
      " TEMPLATE = \"#{value}\""
    when :encoding
      " ENCODING = '#{value}'"
    when :tablespace
      " TABLESPACE = \"#{value}\""
    when :connection_limit
      " CONNECTION LIMIT = #{value}"
    else
      ""
    end
  end

  execute "CREATE DATABASE #{quote_table_name(name)}#{option_string}"
end

#create_savepoint ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 753

def create_savepoint
  execute("SAVEPOINT #{current_savepoint_name}")
end

#current_database ⇒ Object

Returns the current database name.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 868

def current_database
  query('select current_database()', 'SCHEMA')[0][0]
end

#current_schema ⇒ Object

Returns the current schema name.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 873

def current_schema
  query('SELECT current_schema', 'SCHEMA')[0][0]
end

#default_sequence_name(table_name, pk = nil) ⇒ Object

Returns the sequence name for a table’s primary key or some other specified key.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 903

def default_sequence_name(table_name, pk = nil) #:nodoc:
  serial_sequence(table_name, pk || 'id').split('.').last
rescue ActiveRecord::StatementInvalid
  "#{table_name}_#{pk || 'id'}_seq"
end

#disable_referential_integrity ⇒ Object

:nodoc:



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 553

def disable_referential_integrity #:nodoc:
  if supports_disable_referential_integrity? then
    execute(tables.collect { |name| "ALTER TABLE #{quote_table_name(name)} DISABLE TRIGGER ALL" }.join(";"))
  end
  yield
ensure
  if supports_disable_referential_integrity? then
    execute(tables.collect { |name| "ALTER TABLE #{quote_table_name(name)} ENABLE TRIGGER ALL" }.join(";"))
  end
end

#disconnect! ⇒ Object

Disconnects from the database if already connected. Otherwise, this method does nothing.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 405

def disconnect!
  clear_cache!
  @connection.close rescue nil
end

#distinct(columns, orders) ⇒ Object

Returns a SELECT DISTINCT clause for a given set of columns and a given ORDER BY clause.

Redshift requires the ORDER BY columns in the select list for distinct queries, and requires that the ORDER BY include the distinct column.

distinct("posts.id", "posts.created_at desc")


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1098

def distinct(columns, orders) #:nodoc:
  return "DISTINCT #{columns}" if orders.empty?

  # Construct a clean list of column names from the ORDER BY clause, removing
  # any ASC/DESC modifiers
  order_columns = orders.collect { |s| s.gsub(/\s+(ASC|DESC)\s*(NULLS\s+(FIRST|LAST)\s*)?/i, '') }
  order_columns.delete_if { |c| c.blank? }
  order_columns = order_columns.zip((0...order_columns.size).to_a).map { |s,i| "#{s} AS alias_#{i}" }

  "DISTINCT #{columns}, #{order_columns * ', '}"
end
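For the call shown above (with the orders passed as an array), the generated fragment would look roughly like this; the DESC modifier is stripped and the order column is aliased:

distinct("posts.id", ["posts.created_at desc"])
# => "DISTINCT posts.id, posts.created_at AS alias_0"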

#drop_database(name) ⇒ Object

Drops a Redshift database.

Example:

drop_database 'matt_development'


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 812

def drop_database(name) #:nodoc:
  execute "DROP DATABASE IF EXISTS #{quote_table_name(name)}"
end

#encoding ⇒ Object

Returns the current database encoding format.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 878

def encoding
  query(<<-end_sql, 'SCHEMA')[0][0]
    SELECT pg_encoding_to_char(pg_database.encoding) FROM pg_database
    WHERE pg_database.datname LIKE '#{current_database}'
  end_sql
end

#escape_bytea(value) ⇒ Object

Escapes binary strings for bytea input to the database.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 450

def escape_bytea(value)
  @connection.escape_bytea(value) if value
end

#exec_delete(sql, name = 'SQL', binds = []) ⇒ Object Also known as: exec_update



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 706

def exec_delete(sql, name = 'SQL', binds = [])
  log(sql, name, binds) do
    result = binds.empty? ? exec_no_cache(sql, binds) :
                            exec_cache(sql, binds)
    affected = result.cmd_tuples
    result.clear
    affected
  end
end

#exec_query(sql, name = 'SQL', binds = []) ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 695

def exec_query(sql, name = 'SQL', binds = [])
  log(sql, name, binds) do
    result = binds.empty? ? exec_no_cache(sql, binds) :
                            exec_cache(sql, binds)

    ret = ActiveRecord::Result.new(result.fields, result_as_array(result))
    result.clear
    return ret
  end
end

#execute(sql, name = nil) ⇒ Object

Executes an SQL statement, returning a PGresult object on success or raising a PGError exception otherwise.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 685

def execute(sql, name = nil)
  log(sql, name) do
    @connection.async_exec(sql)
  end
end

#explain(arel, binds = []) ⇒ Object

DATABASE STATEMENTS ======================================



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 566

def explain(arel, binds = [])
  sql = "EXPLAIN #{to_sql(arel, binds)}"
  ExplainPrettyPrinter.new.pp(exec_query(sql, 'EXPLAIN', binds))
end

#index_name_length ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1064

def index_name_length
  63
end

#indexes(table_name, name = nil) ⇒ Object

Returns an array of indexes for the given table.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 855

def indexes(table_name, name = nil)
  []
end

#insert_sql(sql, name = nil, pk = nil, id_value = nil, sequence_name = nil) ⇒ Object

Executes an INSERT query and returns the new record’s ID



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 615

def insert_sql(sql, name = nil, pk = nil, id_value = nil, sequence_name = nil)
  unless pk
    # Extract the table from the insert sql. Yuck.
    table_ref = extract_table_ref_from_insert_sql(sql)
    pk = primary_key(table_ref) if table_ref
  end
  
  if pk && use_insert_returning?
    select_value("#{sql} RETURNING #{quote_column_name(pk)}")
  elsif pk
    super
    last_insert_id_value(sequence_name || default_sequence_name(table_ref, pk))
  else
    super
  end
end

#native_database_types ⇒ Object

:nodoc:



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 410

def native_database_types #:nodoc:
  NATIVE_DATABASE_TYPES
end

#outside_transaction? ⇒ Boolean

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 749

def outside_transaction?
  @connection.transaction_status == PGconn::PQTRANS_IDLE
end

#pk_and_sequence_for(table) ⇒ Object

Returns a table’s primary key and its associated sequence.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 939

def pk_and_sequence_for(table) #:nodoc:
  # First try looking for a sequence with a dependency on the
  # given table's primary key.
  result = query(<<-end_sql, 'SCHEMA')[0]
    SELECT attr.attname, seq.relname
    FROM pg_class      seq,
         pg_attribute  attr,
         pg_depend     dep,
         pg_namespace  name,
         pg_constraint cons
    WHERE seq.oid           = dep.objid
      AND seq.relkind       = 'S'
      AND attr.attrelid     = dep.refobjid
      AND attr.attnum       = dep.refobjsubid
      AND attr.attrelid     = cons.conrelid
      AND attr.attnum       = cons.conkey[1]
      AND cons.contype      = 'p'
      AND dep.refobjid      = '#{quote_table_name(table)}'::regclass
  end_sql

  if result.nil? or result.empty?
    result = query(<<-end_sql, 'SCHEMA')[0]
      SELECT attr.attname,
        CASE
          WHEN split_part(pg_get_expr(def.adbin, def.adrelid), '''', 2) ~ '.' THEN
            substr(split_part(pg_get_expr(def.adbin, def.adrelid), '''', 2),
                   strpos(split_part(pg_get_expr(def.adbin, def.adrelid), '''', 2), '.')+1)
          ELSE split_part(pg_get_expr(def.adbin, def.adrelid), '''', 2)
        END
      FROM pg_class       t
      JOIN pg_attribute   attr ON (t.oid = attrelid)
      JOIN pg_attrdef     def  ON (adrelid = attrelid AND adnum = attnum)
      JOIN pg_constraint  cons ON (conrelid = adrelid AND adnum = conkey[1])
      WHERE t.oid = '#{quote_table_name(table)}'::regclass
        AND cons.contype = 'p'
        AND pg_get_expr(def.adbin, def.adrelid) ~* 'nextval'
    end_sql
  end

  [result.first, result.last]
rescue
  nil
end

#primary_key(table) ⇒ Object

Returns just a table’s primary key



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 984

def primary_key(table)
  row = exec_query(<<-end_sql, 'SCHEMA').rows.first
    SELECT DISTINCT(attr.attname)
    FROM pg_attribute attr
    INNER JOIN pg_depend dep ON attr.attrelid = dep.refobjid AND attr.attnum = dep.refobjsubid
    INNER JOIN pg_constraint cons ON attr.attrelid = cons.conrelid AND attr.attnum = cons.conkey[1]
    WHERE cons.contype = 'p'
      AND dep.refobjid = '#{quote_table_name(table)}'::regclass
  end_sql

  row && row.first
end

#query(sql, name = nil) ⇒ Object

Queries the database and returns the results in an Array-like object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 677

def query(sql, name = nil) #:nodoc:
  log(sql, name) do
    result_as_array @connection.async_exec(sql)
  end
end

#quote(value, column = nil) ⇒ Object

Quotes Redshift-specific data types for SQL input.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 462

def quote(value, column = nil) #:nodoc:
  return super unless column

  case value
  when Float
    return super unless value.infinite? && column.type == :datetime
    "'#{value.to_s.downcase}'"
  when Numeric
    return super unless column.sql_type == 'money'
    # Not truly string input, so doesn't require (or allow) escape string syntax.
    "'#{value}'"
  when String
    case column.sql_type
    when 'bytea' then "'#{escape_bytea(value)}'"
    when 'xml'   then "xml '#{quote_string(value)}'"
    when /^bit/
      case value
      when /^[01]*$/      then "B'#{value}'" # Bit-string notation
      when /^[0-9A-F]*$/i then "X'#{value}'" # Hexadecimal notation
      end
    else
      super
    end
  else
    super
  end
end
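Illustrative only: adapter stands for a connected RedshiftbulkAdapter instance, and the column stub below is a hypothetical stand-in for a real column object:

require 'ostruct'

bit_column = OpenStruct.new(:sql_type => 'bit(8)', :type => :string)
adapter.quote('01010011', bit_column)  # => "B'01010011'"  (bit-string notation)
adapter.quote('FF',       bit_column)  # => "X'FF'"        (hexadecimal notation)
adapter.quote('hello')                 # no column given, falls through to super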

#quote_column_name(name) ⇒ Object

Quotes column names for use in SQL queries.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 527

def quote_column_name(name) #:nodoc:
  PGconn.quote_ident(name.to_s)
end

#quote_string(s) ⇒ Object

Quotes strings for use in SQL input.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 503

def quote_string(s) #:nodoc:
  @connection.escape(s)
end

#quote_table_name(name) ⇒ Object

Checks the following cases:

  • table_name

  • “table.name”

  • schema_name.table_name

  • schema_name.“table.name”

  • “schema.name”.table_name

  • “schema.name”.“table.name”



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 515

def quote_table_name(name)
  schema, name_part = extract_pg_identifier_from_name(name.to_s)

  unless name_part
    quote_column_name(schema)
  else
    table_name, name_part = extract_pg_identifier_from_name(name_part)
    "#{quote_column_name(schema)}.#{quote_column_name(table_name)}"
  end
end
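Illustrative results; the quoting of each part mirrors PGconn.quote_ident:

quote_table_name('developers')         # => '"developers"'
quote_table_name('public.developers')  # => '"public"."developers"'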

#quoted_date(value) ⇒ Object

Quotes date/time values for use in SQL input. Includes microseconds if the value is a Time responding to usec.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 533

def quoted_date(value) #:nodoc:
  if value.acts_like?(:time) && value.respond_to?(:usec)
    "#{super}.#{sprintf("%06d", value.usec)}"
  else
    super
  end
end

#reconnect! ⇒ Object

Close then reopen the connection.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 391

def reconnect!
  clear_cache!
  @connection.reset
  @open_transactions = 0
  configure_connection
end

#recreate_database(name, options = {}) ⇒ Object

Drops the database specified on the name attribute and creates it again using the provided options.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 773

def recreate_database(name, options = {}) #:nodoc:
  drop_database(name)
  create_database(name, options)
end

#release_savepoint ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 761

def release_savepoint
  execute("RELEASE SAVEPOINT #{current_savepoint_name}")
end

#remove_index!(table_name, index_name) ⇒ Object

:nodoc:



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1058

def remove_index!(table_name, index_name) #:nodoc:
end

#rename_column(table_name, column_name, new_column_name) ⇒ Object

Renames a column in a table.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1049

def rename_column(table_name, column_name, new_column_name)
  clear_cache!
  execute "ALTER TABLE #{quote_table_name(table_name)} RENAME COLUMN #{quote_column_name(column_name)} TO #{quote_column_name(new_column_name)}"
end

#rename_index(table_name, old_name, new_name) ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1061

def rename_index(table_name, old_name, new_name)
end

#rename_table(name, new_name) ⇒ Object

Renames a table. Also renames a table’s primary key sequence if the sequence name matches the Active Record default.

Example:

rename_table('octopuses', 'octopi')


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1003

def rename_table(name, new_name)
  clear_cache!
  execute "ALTER TABLE #{quote_table_name(name)} RENAME TO #{quote_table_name(new_name)}"
  pk, seq = pk_and_sequence_for(new_name)
  if seq == "#{name}_#{pk}_seq"
    new_seq = "#{new_name}_#{pk}_seq"
    execute "ALTER TABLE #{quote_table_name(seq)} RENAME TO #{quote_table_name(new_seq)}"
  end
end

#reset! ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 398

def reset!
  clear_cache!
  super
end

#reset_pk_sequence!(table, pk = nil, sequence = nil) ⇒ Object

Resets the sequence of a table’s primary key to the maximum value.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 917

def reset_pk_sequence!(table, pk = nil, sequence = nil) #:nodoc:
  unless pk and sequence
    default_pk, default_sequence = pk_and_sequence_for(table)

    pk ||= default_pk
    sequence ||= default_sequence
  end

  if @logger && pk && !sequence
    @logger.warn "#{table} has primary key #{pk} with no default sequence"
  end

  if pk && sequence
    quoted_sequence = quote_table_name(sequence)

    select_value <<-end_sql, 'SCHEMA'
      SELECT setval('#{quoted_sequence}', (SELECT COALESCE(MAX(#{quote_column_name pk})+(SELECT increment_by FROM #{quoted_sequence}), (SELECT min_value FROM #{quoted_sequence})) FROM #{quote_table_name(table)}), false)
    end_sql
  end
end
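A hypothetical call (the table name is illustrative):

reset_pk_sequence!('accounts')
# Sets accounts_id_seq so the next inserted row receives MAX(id) + 1.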

#result_as_array(res) ⇒ Object

Creates a 2D array representing the result set.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 634

def result_as_array(res) #:nodoc:
  # check if we have any binary column and if they need escaping
  ftypes = Array.new(res.nfields) do |i|
    [i, res.ftype(i)]
  end

  rows = res.values
  return rows unless ftypes.any? { |_, x|
    x == BYTEA_COLUMN_TYPE_OID || x == MONEY_COLUMN_TYPE_OID
  }

  typehash = ftypes.group_by { |_, type| type }
  binaries = typehash[BYTEA_COLUMN_TYPE_OID] || []
  monies   = typehash[MONEY_COLUMN_TYPE_OID] || []

  rows.each do |row|
    # unescape string passed BYTEA field (OID == 17)
    binaries.each do |index, _|
      row[index] = unescape_bytea(row[index])
    end

    # If this is a money type column and there are any currency symbols,
    # then strip them off. Indeed it would be prettier to do this in
    # RedshiftColumn.string_to_decimal but would break form input
    # fields that call value_before_type_cast.
    monies.each do |index, _|
      data = row[index]
      # Because money output is formatted according to the locale, there are two
      # cases to consider (note the decimal separators):
      #  (1) $12,345,678.12
      #  (2) $12.345.678,12
      case data
      when /^-?\D+[\d,]+\.\d{2}$/  # (1)
        data.gsub!(/[^-\d.]/, '')
      when /^-?\D+[\d.]+,\d{2}$/  # (2)
        data.gsub!(/[^-\d,]/, '').sub!(/,/, '.')
      end
    end
  end
end

#rollback_db_transaction ⇒ Object

Aborts a transaction.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 745

def rollback_db_transaction
  execute "ROLLBACK"
end

#rollback_to_savepoint ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 757

def rollback_to_savepoint
  execute("ROLLBACK TO SAVEPOINT #{current_savepoint_name}")
end

#schema_exists?(name) ⇒ Boolean

Returns true if schema exists.

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 846

def schema_exists?(name)
  exec_query(<<-SQL, 'SCHEMA').rows.first[0].to_i > 0
    SELECT COUNT(*)
    FROM pg_namespace
    WHERE nspname = '#{name}'
  SQL
end

#schema_search_path ⇒ Object

Returns the active schema search path.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 898

def schema_search_path
  @schema_search_path ||= query('SHOW search_path', 'SCHEMA')[0][0]
end

#schema_search_path=(schema_csv) ⇒ Object

Sets the schema search path to a string of comma-separated schema names. Names beginning with $ have to be quoted (e.g. $user => ‘$user’). See: www.redshift.org/docs/current/static/ddl-schemas.html

This should not be called manually; set it in database.yml instead.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 890

def schema_search_path=(schema_csv)
  if schema_csv
    execute("SET search_path TO #{schema_csv}", 'SCHEMA')
    @schema_search_path = schema_csv
  end
end

#select_rows(sql, name = nil) ⇒ Object

Executes a SELECT query and returns an array of rows. Each row is an array of field values.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 610

def select_rows(sql, name = nil)
  select_raw(sql, name).last
end

#serial_sequence(table, column) ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 909

def serial_sequence(table, column)
  result = exec_query(<<-eosql, 'SCHEMA')
    SELECT pg_get_serial_sequence('#{table}', '#{column}')
  eosql
  result.rows.first.first
end

#session_auth=(user) ⇒ Object

Sets the authorized user for this session.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 542

def session_auth=(user)
  clear_cache!
  exec_query "SET SESSION AUTHORIZATION #{user}"
end

#sql_for_insert(sql, pk, id_value, sequence_name, binds) ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 717

def sql_for_insert(sql, pk, id_value, sequence_name, binds)
  unless pk
    # Extract the table from the insert sql. Yuck.
    table_ref = extract_table_ref_from_insert_sql(sql)
    pk = primary_key(table_ref) if table_ref
  end

  sql = "#{sql} RETURNING #{quote_column_name(pk)}" if pk && use_insert_returning?

  [sql, binds]
end

#substitute_at(column, index) ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 691

def substitute_at(column, index)
  Arel::Nodes::BindParam.new "$#{index + 1}"
end

#supports_ddl_transactions? ⇒ Boolean

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 428

def supports_ddl_transactions?
  true
end

#supports_disable_referential_integrity? ⇒ Boolean

REFERENTIAL INTEGRITY ====================================

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 549

def supports_disable_referential_integrity? #:nodoc:
  false
end

#supports_explain? ⇒ Boolean

Returns true.

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 438

def supports_explain?
  true
end

#supports_import? ⇒ Boolean

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 288

def supports_import?
  true
end

#supports_index_sort_order? ⇒ Boolean

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 284

def supports_index_sort_order?
  true
end

#supports_insert_with_returning? ⇒ Boolean

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 424

def supports_insert_with_returning?
  false
end

#supports_migrations? ⇒ Boolean

Returns true, since this connection adapter supports migrations.

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 415

def supports_migrations?
  true
end

#supports_primary_key? ⇒ Boolean

Does Redshift support finding primary key on non-Active Record tables?

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 420

def supports_primary_key? #:nodoc:
  true
end

#supports_savepoints? ⇒ Boolean

Returns true, since this connection adapter supports savepoints.

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 433

def supports_savepoints?
  true
end

#supports_statement_cache? ⇒ Boolean

Returns true, since this connection adapter supports prepared statement caching.

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 280

def supports_statement_cache?
  true
end

#table_alias_length ⇒ Object

Returns the maximum identifier length supported by the Redshift server.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 443

def table_alias_length
  @table_alias_length ||= query('SHOW max_identifier_length')[0][0].to_i
end

#table_exists?(name) ⇒ Boolean

Returns true if the table exists. If the schema is not specified as part of name, this will only find tables within the current schema search path (regardless of permissions to access tables in other schemas).

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 828

def table_exists?(name)
  schema, table = Utils.extract_schema_and_table(name.to_s)
  return false unless table

  binds = [[nil, table]]
  binds << [nil, schema] if schema

  exec_query(<<-SQL, 'SCHEMA').rows.first[0].to_i > 0
      SELECT COUNT(*)
      FROM pg_class c
      LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
      WHERE c.relkind in ('v','r')
      AND c.relname = '#{table.gsub(/(^"|"$)/,'')}'
      AND n.nspname = #{schema ? "'#{schema}'" : 'ANY (current_schemas(false))'}
  SQL
end

#tables(name = nil) ⇒ Object

Returns the list of all tables in the schema search path or a specified schema.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 817

def tables(name = nil)
  query(<<-SQL, 'SCHEMA').map { |row| "#{row[0]}.#{row[1]}" }
    SELECT schemaname, tablename
    FROM pg_tables
    WHERE schemaname = ANY (current_schemas(false))
  SQL
end

#type_cast(value, column) ⇒ Object



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 490

def type_cast(value, column)
  return super unless column

  case value
  when String
    return super unless 'bytea' == column.sql_type
    { :value => value, :format => 1 }
  else
    super
  end
end

#type_to_sql(type, limit = nil, precision = nil, scale = nil) ⇒ Object

Maps logical Rails types to Redshift-specific data types.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 1069

def type_to_sql(type, limit = nil, precision = nil, scale = nil)
  case type.to_s
  when 'binary'
    # Redshift doesn't support limits on binary (bytea) columns.
    # The hard limit is 1Gb, because of a 32-bit size field, and TOAST.
    case limit
    when nil, 0..0x3fffffff; super(type)
    else raise(ActiveRecordError, "No binary type has byte size #{limit}.")
    end
  when 'integer'
    return 'integer' unless limit

    case limit
      when 1, 2; 'smallint'
      when 3, 4; 'integer'
      when 5..8; 'bigint'
      else raise(ActiveRecordError, "No integer type has byte size #{limit}. Use a numeric with precision 0 instead.")
    end
  else
    super
  end
end
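Illustrative mappings implied by the cases above; :string and other types fall through to the superclass and NATIVE_DATABASE_TYPES:

type_to_sql(:integer)       # => "integer"
type_to_sql(:integer, 2)    # => "smallint"
type_to_sql(:integer, 8)    # => "bigint"
type_to_sql(:string, 40)    # => "character varying(40)"  (via super)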

#unescape_bytea(value) ⇒ Object

Unescapes bytea output from a database to the binary string it represents. NOTE: This is NOT an inverse of escape_bytea! It is only to be used on escaped binary output from the database driver.


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 457

def unescape_bytea(value)
  @connection.unescape_bytea(value) if value
end

#update_sql(sql, name = nil) ⇒ Object

Executes an UPDATE query and returns the number of affected tuples.



# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 730

def update_sql(sql, name = nil)
  super.cmd_tuples
end

#use_insert_returning? ⇒ Boolean

Returns:

  • (Boolean)


# File 'lib/active_record/connection_adapters/redshiftbulk_adapter.rb', line 765

def use_insert_returning?
  @use_insert_returning
end