IBM Datastage v11.5

Description

Quiz on IBM Datastage v11.5, created by Ricardo Camilo on 07/03/2018.

Resource summary

Question 1

Question
In your ETL application design you have found several areas of common processing requirements in the mapping specifications. These common logic areas include code validation lookups and name formatting. The common logic areas have the same logic, but the column metadata differs between jobs. What is the best way to implement these common logic areas in your ETL application?
Answer
  • A. Create parallel routines for each of the common logic areas and for each of the unique column metadata formats.
  • B. Create separate jobs for each layout and choose the appropriate job to run within a job sequencer.
  • C. Create parallel shared containers and define columns combining all data formats.
  • D. Create parallel shared containers with Runtime Column Propagation (RCP) ON and define only necessary common columns needed for the logic.

Question 2

Question
When optimizing a job, Balanced Optimization will NOT search the job for what pattern?
Answer
  • A. Links
  • B. Stages
  • C. Sequencers
  • D. Property Settings

Question 3

Question
Your job sequence must be restartable. It runs Job1, Job2, and Job3 serially. It has been compiled with "Add checkpoints so sequence is restartable". Job1 must execute every run even after a failure. Which two properties must be selected to ensure that Job1 is run each time, even after a failure? (Choose two.)
Answer
  • A. Set the Job1 Activity stage to "Do not checkpoint run".
  • B. Set trigger on the Job1 Activity stage to "Unconditional".
  • C. In the Job1 Activity stage set the Execution action to "Run".
  • D. In the Job1 Activity stage set the Execution action to "Reset if required, then run".
  • E. Use the Nested Condition Activity with a trigger leading to Job1; set the trigger expression type to "Unconditional".

Question 4

Question
You would like to pass values into parameters that will be used in a variety of downstream activity stages within a job sequence. What are two valid ways to do this? (Choose two.)
Answer
  • A. Use local parameters.
  • B. Place a parameter set stage on the job sequence.
  • C. Add a Transformer stage variable to the job sequence canvas.
  • D. Check the "Propagate Parameters" checkbox in the Sequence Job properties.
  • E. Use the UserVariablesActivity stage to populate the local parameters from an outside source such as a file.

Question 5

Question
On the DataStage development server, you have been making enhancements to a copy of a DataStage job running on the production server. You have been asked to document the changes you have made to the job. What tool in DataStage Designer would you use?
Answer
  • A. Compare Against
  • B. diffapicmdline.exe
  • C. DSMakeJobReport
  • D. Cross Project Compare

Question 6

Question
Your customer is using Source Code Control Integration for Information server and have tagged artifacts for version 1. You must create a deployment package from the version 1. Before you create the package you will have to ensure the project is up to date with version 1. What two things must you do to update the meta-data repository with the artifacts tagged as version 1? (Choose two.)
Answer
  • A. Right-click the asset and click the Deploy command.
  • B. Right-click the asset and click the Team Import command.
  • C. Right-click the asset and click Update From Source Control Workspace.
  • D. Right-click the asset and click Replace From Source Control Workspace.
  • E. Right-click the asset and click the Team command to update the Source Control Workspace with the asset.

Question 7

Question
What two features distinguish the Operations Console from the Director job log? (Choose two.)
Answer
  • A. Jobs can be started and stopped in Director, but not in the Operations Console.
  • B. The Operations Console can monitor jobs running on only one DataStage engine.
  • C. Workload management is supported within Director, but not in the Operations Console.
  • D. The Operations Console can monitor jobs running on more than one DataStage engine.
  • E. The Operations Console can run on systems where the DataStage clients are not installed.

Question 8

Question
The Score is divided into which two sections? (Choose two.)
Answer
  • A. Stages
  • B. File sets
  • C. Schemas
  • D. Data sets
  • E. Operators

Question 9

Question
A job validates account numbers with a reference file using a Join stage, which is hash partitioned by account number. Runtime monitoring reveals that some partitions process many more rows than others. Assuming adequate hardware resources, which action can be used to improve the performance of the job?
Answer
  • A. Replace the Join with a Merge stage.
  • B. Change the number of nodes in the configuration file.
  • C. Add a Sort stage in front of the Join stage. Sort by account number.
  • D. Use Round Robin partitioning on the stream and Entire partitioning on the reference.
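The skew described in this question can be sketched in plain Python (not DataStage code; the account numbers are hypothetical sample data): hash partitioning a column dominated by one key value sends most rows to a single partition, while round-robin distribution keeps partitions balanced.

```python
# Illustration: partition skew under hash vs. round-robin distribution.
from collections import Counter

# 90% of rows share one account number, a classic skewed key.
accounts = ["ACME"] * 90 + [f"A{i}" for i in range(10)]
num_partitions = 4

# Hash partitioning: every "ACME" row lands in the same partition.
hash_counts = Counter(hash(a) % num_partitions for a in accounts)

# Round-robin: rows are dealt out evenly regardless of key value.
rr_counts = Counter(i % num_partitions for i in range(len(accounts)))

print(sorted(hash_counts.values()))  # one partition holds at least 90 rows
print(sorted(rr_counts.values()))    # [25, 25, 25, 25]
```

This is why repartitioning the stream side round-robin (with the reference side Entire) evens out the work when the join key is heavily skewed.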

Question 10

Question
Which option is required to identify a particular job's player processes?
Answer
  • A. Set $APT_DUMP_SCORE to true.
  • B. Set $APT_PM_SHOW_PIDS to true.
  • C. Log onto the server and issue the command "ps -ef | grep ds".
  • D. Use the DataStage Director Job administration screen to display active player processes.

Question 11

Question
Which two parallel job stages allow you to use partial schemas? (Choose two.)
Answer
  • A. Peek stage
  • B. File Set stage
  • C. Data Set stage
  • D. Column Export stage
  • E. External Target stage

Question 12

Question
What are the two Transfer Protocol Transfer Mode property options for the FTP Enterprise stage? (Choose two.)
Answer
  • A. FTP
  • B. EFTP
  • C. TFTP
  • D. SFTP
  • E. RFTP

Question 13

Question
Identify the two statements that are true about the functionality of the XML Pack 3.0. (Choose two.)
Answer
  • A. XML Stages are Plug-in stages.
  • B. XML Stage can be found in the Database folder on the palette.
  • C. Uses a unique custom GUI interface called the Assembly Editor.
  • D. It includes the XML Input, XML Output, and XML Transformer stages.
  • E. A single XML Stage, which can be used as a source, target, or transformation.

Question 14

Question
When using a Sequential File stage as a source, what are the two reject mode property options? (Choose two.)
Answer
  • A. Set
  • B. Fail
  • C. Save
  • D. Convert
  • E. Continue

Question 15

Question
Which two statements are true about Data Sets? (Choose two.)
Answer
  • A. Data sets contain ASCII data.
  • B. Data Sets preserve partitioning.
  • C. Data Sets require repartitioning.
  • D. Data Sets represent persistent data.
  • E. Data Sets require import/export conversions.

Question 16

Question
What is the correct method to process a file containing multiple record types using a Complex Flat File stage?
Answer
  • A. Flatten the record types into a single record type.
  • B. Manually break the file into multiple files by record type.
  • C. Define record definitions on the Constraints tab of the Complex Flat File stage.
  • D. Load a table definition for each record type on the Records tab of the Complex Flat File stage.

Question 17

Question
Which two file stages allow you to configure rejecting data to a reject link? (Choose two.)
Answer
  • A. Data Set Stage
  • B. Compare Stage
  • C. Big Data File Stage
  • D. Lookup File Set Stage
  • E. Complex Flat File Stage

Question 18

Question
A customer must compare a date column with a job parameter date to determine which output links the row belongs on. What stage should be used for this requirement?
Answer
  • A. Filter stage
  • B. Switch stage
  • C. Compare stage
  • D. Transformer stage

Question 19

Question
Rows of data going into a Transformer stage are sorted and hash partitioned by the Input.Product column. Using stage variables, how can you determine when a new row is the first of a new group of Product rows?
Answer
  • A. Create a stage variable named sv_IsNewProduct and follow it by a second stage variable named sv_Product. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product = sv_Product THEN "YES" ELSE "NO".
  • B. Create a stage variable named sv_IsNewProduct and follow it by a second stage variable named sv_Product. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product <> sv_Product THEN "YES" ELSE "NO".
  • C. Create a stage variable named sv_Product and follow it by a second stage variable named sv_IsNewProduct. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product = sv_Product THEN "YES" ELSE "NO".
  • D. Create a stage variable named sv_Product and follow it by a second stage variable named sv_IsNewProduct. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product <> sv_Product THEN "YES" ELSE "NO".
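The pattern this question tests can be sketched in plain Python (not DataStage syntax; the function name is illustrative). Stage variables evaluate top to bottom, so the comparison against the previous row's value must happen before the holder variable is updated with the current value.

```python
# Sketch of the first-of-group pattern: compare first, then update.
def flag_new_groups(products):
    sv_product = None          # holds the *previous* row's Product value
    flags = []
    for current in products:   # rows arrive sorted by Product
        # Evaluated first: is the incoming value different from the last row's?
        sv_is_new = "YES" if current != sv_product else "NO"
        # Updated second, so it is ready for the next row's comparison.
        sv_product = current
        flags.append(sv_is_new)
    return flags

print(flag_new_groups(["A", "A", "B", "C", "C"]))
# ['YES', 'NO', 'YES', 'YES', 'NO'] - first row of each group is flagged
```

Reversing the two steps (updating the holder before comparing) would make the comparison always see the current value, which is why the ordering of the stage variables matters.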

Question 20

Question
Which statement describes what happens when Runtime Column Propagation is disabled for a parallel job?
Answer
  • A. An input column value flows into a target column only if it matches it by name.
  • B. An input column value flows into a target column only if it is explicitly mapped to it.
  • C. You must set APT_AUTO_MAP project environment to true to allow output link mapping to occur.
  • D. An input column value flows into a target column based on its position in the input row. For example, first column in the input row goes into the first target column.

Question 21

Question
Which statement is true when using the SaveInputRecord() function in a Transformer stage?
Answer
  • A. You can only use the SaveInputRecord() function in Loop variable derivations.
  • B. You can access the saved queue records using Vector referencing in Stage variable derivations.
  • C. You must retrieve all saved queue records using the GetSavedInputRecord() function within Loop variable derivations.
  • D. You must retrieve all saved queue records using the GetSavedInputRecord() function within Stage variable derivations.
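The queue semantics behind this question can be sketched in plain Python (stand-in functions, not the DataStage API): SaveInputRecord() appends the current input row to a FIFO cache during stage-variable evaluation and returns the cache depth; every queued row must then be pulled back with GetSavedInputRecord(), one per loop iteration.

```python
# FIFO sketch of the Transformer save/retrieve record cache.
from collections import deque

queue = deque()

def save_input_record(row):
    """Stand-in for SaveInputRecord(): queue the row, return queue depth."""
    queue.append(row)
    return len(queue)

def get_saved_input_record():
    """Stand-in for GetSavedInputRecord(): dequeue the oldest saved row."""
    return queue.popleft()

n = save_input_record({"id": 1})   # called from a stage variable derivation
for _ in range(n):                 # loop must drain every saved record
    row = get_saved_input_record() # called from a loop variable derivation
print(len(queue))                  # 0 - all saved records were retrieved
```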

Question 22

Question
Which derivations are executed first in the Transformer stage?
Answer
  • A. Input column derivations
  • B. Loop variable derivations
  • C. Stage variable derivations
  • D. Output column derivations

Question 23

Question
In a Transformer, which two mappings can be handled by default type conversions? (Choose two.)
Answer
  • A. Integer input column mapped to raw output column.
  • B. Date input column mapped to a string output column.
  • C. String input column mapped to a date output column.
  • D. String input column mapped to integer output column.
  • E. Integer input column mapped to string output column.

Question 24

Question
Which two different types of custom stages can you create to extend the Parallel job syntax? (Choose two.)
Answer
  • A. Input stage
  • B. Basic stage
  • C. Group stage
  • D. Custom stage
  • E. Wrapped stage

Question 25

Question
What is the purpose of the APT_DUMP_SCORE environment variable?
Answer
  • A. There is no such environment variable.
  • B. It is an environment variable that turns on the job monitor.
  • C. It is an environment variable that enables the collection of runtime performance statistics.
  • D. It is a reporting environment variable that adds additional runtime information in the job log.

Question 26

Question
Which two data repositories can be used for user authentication within the Information Server Suite? (Choose two.)
Answer
  • A. IIS Web Console
  • B. IBM Metadata repository
  • C. Standalone LDAP registry
  • D. Operations Console database
  • E. IBM Information Server user directory

Question 27

Question
Which two statements are true about the use of named node pools? (Choose two.)
Answer
  • A. Grid environments must have named node pools for data processing.
  • B. Named node pools can allow separation of buffering from sorting disks.
  • C. When named node pools are used, DataStage uses named pipes between stages.
  • D. Named node pools limit the total number of partitions that can be specified in the configuration file.
  • E. Named node pools constraints will limit stages to be executed only on the nodes defined in the node pools.

Question 28

Question
Which step is required to change from a normal lookup to a sparse lookup in an ODBC Connector stage?
Answer
  • A. Change the partitioning to hash.
  • B. Sort the data on the reference link.
  • C. Change the lookup option in the stage properties to "Sparse".
  • D. Replace columns at the beginning of a SELECT statement with a wildcard asterisk (*).

Question 29

Question
Which two pieces of information are required to be specified for the input link on a Netezza Connector stage? (Choose two.)
Answer
  • A. Partitioning
  • B. Server name
  • C. Table definitions
  • D. Buffering settings
  • E. Error log directory

Question 30

Question
Which requirement must be met to read from a database in parallel using the ODBC connector?
Answer
  • A. ODBC connector always reads in parallel.
  • B. Set the Enable partitioning property to Yes.
  • C. Configure environment variable $APT_PARTITION_COUNT.
  • D. Configure environment variable $APT_MAX_TRANSPORT_BLOCK_SIZE.

Question 31

Question
Configuring the weighting column of an Aggregator stage affects which two options? (Choose two.)
Answer
  • A. Sum
  • B. Maximum Value
  • C. Average of Weights
  • D. Coefficient of Variation
  • E. Uncorrected Sum of Squares

Question 32

Question
The parallel framework was extended for real-time applications. Identify two of these aspects. (Choose two.)
Answer
  • A. XML stage.
  • B. End-of-wave.
  • C. Real-time stage types that re-run jobs.
  • D. Real-time stage types that keep jobs always up and running.
  • E. Support for transactions within source database connector stages.

Question 33

Question
How must the input data set be organized for input into the Join stage? (Choose two.)
Answer
  • A. Unsorted
  • B. Key partitioned
  • C. Hash partitioned
  • D. Entire partitioned
  • E. Sorted by Join key

Question 34

Question
The Change Apply stage produces a change Data Set with a new column representing the code for the type of change. What are two change values identified by these code values? (Choose two.)
Answer
  • A. Edit
  • B. Final
  • C. Copy
  • D. Deleted
  • E. Remove Duplicates

Question 35

Question
What stage allows for more than one reject link?
Answer
  • A. Join stage
  • B. Merge stage
  • C. Lookup stage
  • D. Funnel stage

Question 36

Question
Which statement is correct about the Data Rules stage?
Answer
  • A. The Data Rules stage works with rule definitions only; not executable rules.
  • B. As a best practice, you should create and publish new rules from the Data Rules stage.
  • C. If you have Rule Creator role in InfoSphere Information Analyzer, you can create and publish rule definitions and rule set definitions directly from the stage itself.
  • D. When a job that uses the Data Rules stage runs, the output of the stage is passed to the downstream stages and results are stored in the Analysis Results database (IADB).

Question 37

Question
Which job design technique can be used to give unique names to sequential output files that are used in multi-instance jobs?
Answer
  • A. Use parameters to identify file names.
  • B. Generate unique file names by using a macro.
  • C. Use DSJobInvocationId to generate a unique filename.
  • D. Use a Transformer stage variable to generate the name.

Question 38

Question
The ODBC stage can handle which two SQL Server data types? (Choose two.)
Answer
  • A. Date
  • B. Time
  • C. GUID
  • D. Datetime
  • E. SmallDateTime

Question 39

Question
Which DB2 to InfoSphere DataStage data type conversion is correct when reading data with the DB2 Connector stage?
Answer
  • A. XML to SQL_WVARCHAR
  • B. BIGINT to SQL_BIGINT (INT32)
  • C. VARCHAR, 32768 to SQL_VARCHAR
  • D. CHAR FOR BIT DATA to SQL_VARBINARY

Question 40

Question
Which Oracle data type conversion is correct?
Answer
  • A. Oracle data type RAW converts to RAW in Oracle Connector stage.
  • B. Oracle data type NUMBER(6,0) converts to INT32 in Oracle Connector stage.
  • C. Oracle data type NUMBER(15,0) converts to INT32 in Oracle Connector stage.
  • D. Oracle data type NUMBER converts to DECIMAL(38,0) in Oracle Connector stage.

Question 41

Question
Which two statements about using a Load write method in an Oracle Connector stage to tables that have indexes on them are true? (Choose two.)
Answer
  • A. Set the Upsert mode property to "Index".
  • B. Set the Index mode property to "Bypass".
  • C. The Load Write method uses the Parallel Direct Path load method.
  • D. The Load Write method uses "Rebuild" mode with no logging automatically.
  • E. Set the environment variable APT_ORACLE_LOAD_OPTIONS to "OPTIONS (DIRECT=TRUE, PARALLEL=FALSE)".

Question 42

Question
Which Oracle Connector stage property can be set to tune job performance?
Answer
  • A. Array size
  • B. Memory size
  • C. Partition size
  • D. Transaction size

Question 43

Question
Which two different types of custom stages can you create to extend the Parallel job syntax? (Choose two.)
Answer
  • A. Input stage
  • B. Basic stage
  • C. Group stage
  • D. Custom stage
  • E. Wrapped stage

Question 44

Question
When using the loop functionality in a Transformer, which statement is true regarding Transformer processing?
Answer
  • A. Stage variables can be referenced in loop conditions.
  • B. Stage variables can be executed after loop variable expressions.
  • C. Loop variable expressions are executed before input link column expressions.
  • D. Output links can be excluded from being associated with a True loop condition.

Question 45

Question
Which stage classifies data rows from a single input into groups and computes totals?
Answer
  • A. Modify stage
  • B. Compare stage
  • C. Aggregator stage
  • D. Transformer stage

Question 46

Question
Which statement describes a SCD Type One update in the Slowly Changing Dimension stage?
Answer
  • A. Adds a new row to the fact table.
  • B. Adds a new row to a dimension table.
  • C. Overwrites an attribute in the fact table.
  • D. Overwrites an attribute in a dimension table.

Question 47

Question
Which derivations are executed last in the Transformer stage?
Answer
  • A. Input column derivations
  • B. Loop variable derivations
  • C. Output column derivations
  • D. Stage variable derivations

Question 48

Question
The derivation for a stage variable is: Upcase(input_column1) : ' ' : Upcase(input_column2). Suppose that input_column1 contains a NULL value. Assume the legacy NULL processing option is turned off. Which behavior is expected?
Answer
  • A. The job aborts.
  • B. NULL is written to the target stage variable.
  • C. The input row is either dropped or rejected depending on whether the Transformer has a reject link.
  • D. The target stage variable is populated with spaces or zeros depending on the stage variable data type.
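The NULL-propagation behavior this question describes can be sketched in plain Python (None stands in for NULL; the function name is illustrative, not DataStage syntax): a NULL in any operand of the concatenation makes the whole derivation NULL, rather than aborting the job or rejecting the row.

```python
# Sketch of Upcase(col1) : ' ' : Upcase(col2) with NULL propagation.
def upcase_concat(col1, col2):
    if col1 is None or col2 is None:
        return None                      # NULL propagates to the result
    return col1.upper() + " " + col2.upper()

print(upcase_concat(None, "smith"))      # None - the stage variable is NULL
print(upcase_concat("john", "smith"))    # JOHN SMITH
```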

Question 49

Question
Which statement is true about table definitions created in DataStage Designer?
Answer
  • A. By default, table definitions created in DataStage Designer are visible to other Information Server products.
  • B. Table definitions created in DataStage Designer are local to DataStage and cannot be shared with other Information Server products.
  • C. When a table definition is created in one DataStage project, it is automatically available in other DataStage projects, but not outside of DataStage.
  • D. Table definitions created in DataStage Designer are not by default available to other Information Server products, but they can be shared with other Information Server products.

Question 50

Question
What are two advantages of using Runtime Column Propagation (RCP)? (Choose two.)
Answer
  • A. RCP forces a developer to define all columns explicitly.
  • B. Only columns used in the data flow need to be defined.
  • C. Sequential files don't require schema files when using RCP.
  • D. Only columns that are defined as VarChar need RCP enabled.
  • E. Columns not specifically used in the flow are propagated as if they were.