
Sub command: Custom


Introduction

The custom sub command gives the user control over how the data is loaded: the user can generate a file containing the plan of how the data will be loaded into each column, and that file can then be modified to tell the mock tool what kind of data needs to be loaded.

Short Hand: The short hand of the custom subcommand is c

Usage

The usage of the custom subcommand is

[gpadmin@gpdb-m ~]$ mock custom --help
Control the data being written to the tables

Usage:
  mock custom [flags]

Aliases:
  custom, c

Flags:
  -f, --file string         Mock the tables provided in the yaml file
  -h, --help                help for custom
  -t, --table-name string   Provide the table name whose skeleton need to be copied to the file

Global Flags:
  -a, --address string    Hostname where the postgres database lives
  -d, --database string   Database to mock the data (default "gpadmin")
  -q, --dont-prompt       Run without asking for confirmation
  -i, --ignore            Ignore checking and fixing constraints
  -w, --password string   Password for the user to connect to database
  -p, --port int          Port number of the postgres database (default 3000)
  -r, --rows int          Total rows to be faked or mocked (default 10)
  -u, --username string   Username to connect to the database
  -v, --verbose           Enable verbose or debug logging

Example

  • Let's take the example of a table that has a check constraint (for example, a partitioned table in a Greenplum database; a sketch of such a table's definition is given at the end of this page)
  • Now let's build a plan for this table
    mock custom --table-name sales
    -- OR --
    mock c -t sales
    

    NOTE:

    • If the table is not in the default public schema, then use mock c -t <schema-name>.<table-name>
    • If you want to generate a plan for multiple tables, then use mock c -t <schema-name1>.<table-name1>,<schema-name2>.<table-name2>,...,<schema-nameN>.<table-nameN>
  • Once the plan is generated, you will receive the location of the YAML file at the end of the output: The YAML is saved to file: <PATH>/<FILENAME> (screenshot: creating custom files)
  • Edit the generated file
    • For each column you want to control, change the key Random: true to Random: false
    • Then add, under the Values key, an array of values from which the mock data will be randomly picked, for example:
         Custom:
         - Schema: public
           Table: sales
           Column:
           - Name: id
             Type: integer
             Random: true
             Values: []
           - Name: date
             Type: date
             Random: false
             Values:
              - 2016-01-01
              - 2016-03-01
              - 2016-04-01
           - Name: amt
             Type: numeric(10,2)
             Random: true
             Values: []
      
    • Continue this procedure for the rest of the columns you are interested in
  • Using the custom generated plan, feed the YAML file to the mock tool
    mock custom --file <filename or path/filename> 
    -- OR --
    mock c -f <filename or path/filename>
    

(Screenshot: loading data via the custom file)
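
For reference, the sales table used in the example above could look something like the following. This is only a minimal sketch: the column names and types are taken from the generated plan shown earlier, while the DISTRIBUTED BY clause and the monthly partition ranges are assumptions made for illustration.

-- Sketch of the example table; the partition layout is an assumption.
CREATE TABLE sales (
    id   integer,
    date date,
    amt  numeric(10,2)
)
DISTRIBUTED BY (id)
PARTITION BY RANGE (date)
(
    START (date '2016-01-01') INCLUSIVE
    END   (date '2017-01-01') EXCLUSIVE
    EVERY (INTERVAL '1 month')
);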
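
After loading data through the edited plan, a quick sanity check (not produced by the tool itself) is to confirm that the controlled column only contains the values listed in the plan. Assuming the sales example and the three dates used above:

-- Every row of the date column should hold one of the three dates
-- listed under Values in the plan file; the other columns stay random.
SELECT date, count(*) AS rows_loaded
FROM sales
GROUP BY date
ORDER BY date;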
