Question 1
Question
1 of 80
Basic Data Transformation
From the account table, you want to know how many accounts you have per account type. You therefore output ACCOUNT_TYPE, which is also used in the GROUP BY tab, and an additional COUNTER column.
Which mapping would you use for the COUNTER column? (1)
Question 2
Question
7 of 80
Basic Data Transformation
Which join types are supported in the Query Editor? (3)
Answer
- Right outer join
- Cross-product join
- Full outer join
- Left outer join
- Inner join
Question 3
Question
13 of 80
Basic Data Transformation
You executed a job in a development environment and it raised primary key violation errors.
Which option do you use to find which primary key values caused the errors? (1)
Question 4
Question
14 of 80
Basic Data Transformation
Which of the following can be performed in the Query Transform? (3)
Answer
- Separate data into distinct output sets
- Join data from multiple sources
- Assign a value to a global variable
- Change column datatypes
- Change the primary key flag of columns
Question 5
Question
20 of 80 (D)
Basic Data Transformation
Which objects can you use to define an output schema in a Query transform? (3)
Question 6
Question
35 of 80
Basic Data Transformation
What is the expected result of the expression quarter(to_date('Feb 22, 2013','mon dd, yyyy'))? (1)
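Note: the expression above can be evaluated step by step. A minimal sketch in Data Services script syntax, assuming $date and $result are variables declared at the job or workflow level:

    # to_date() converts the string into a date value using the 'mon dd, yyyy' format
    $date = to_date('Feb 22, 2013', 'mon dd, yyyy');
    # quarter() returns the calendar quarter (1-4); 22 February 2013 falls in the first quarter
    $result = quarter($date);   # $result = 1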
Question 7
Question
39 of 80
Basic Data Transformation
In the source-target mapping document, a Target column is specified to contain a concatenation of a First Name column with a Last Name column from the source table.
Where in the Query Editor can you accomplish this task? (1)
Answer
- SELECT
- GROUP BY
- ORDER BY
- MAPPING
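Note: a minimal sketch of what such a concatenation typically looks like as a column mapping expression (Data Services expression syntax; the column names FIRST_NAME and LAST_NAME are assumptions based on the question):

    # Mapping expression for the target column: first name, a space, then last name
    FIRST_NAME || ' ' || LAST_NAME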
Question 8
Question
48 of 80
Basic Data Transformation
You have just added a new template table to a data flow. You validate and save the data flow and its parent job.
What happens during the save? (1)
Answer
- The metadata for the template table is added to the local repository
- The job server verifies the existence of the template table
- The job server attempts to (re)create the table
- The Designer attempts to (re)create the table
Question 9
Question
54 of 80 (D)
Basic Data Transformation
You are part of a development team on a data integration project and have been instructed to use the error_number() function to identify errors while running the job.
Where do you place the error_number() function? (1)
Answer
- Validation transform
- Script at job level
- Query transform
- Script inside a catch
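Note: a minimal sketch of a script placed inside a catch block (Data Services script syntax; implicit conversion of the numeric error number to varchar via || is assumed):

    # Log which error interrupted the preceding try block
    print('Error ' || error_number() || ': ' || error_message());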
Question 10
Question
59 of 80
Basic Data Transformation
Which options are available when using the Include in Transaction target table option? (2)
Answer
- Transaction order
- Auto correct load
- Rows per commit
- Use overflow file
Question 11
Question
61 of 80
Basic Data Transformation
In which cases do you recommend using embedded data flows? (2)
Answer
- To optimize the performance of the data flow
- To make the layout more readable by grouping sections of the data flow
- To reuse the extraction logic of the existing data flow in another data flow
- To run data flows in parallel inside the parent data flow
Question 12
Question
76 of 80
Basic Data Transformation
You have multiple output schemas in a Query transform. All columns in the schema you want to edit are greyed-out.
Which action do you perform in order to change the mapping of a column inside this schema? (3)
Answer
- Double-click the column in the output schema area
- Select the schema from the Output list
- Right-click the parent schema and select the Nest with Sub-Schemas option
- Right-click the schema, column or function and select Make Current
- Right-click the parent schema and select the Unnest with Sub-Schemas option
Question 13
Question
79 of 80
Basic Data Transformation
Which function will deliver the same results as nested IFTHENELSE functions? (1)
Answer
- Literal
- Match_pattern
- Word_ext
- Decode
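Note: a minimal sketch of the equivalence the question refers to (Data Services expression syntax; the column REGION and the result strings are illustrative assumptions):

    # Nested ifthenelse() calls ...
    ifthenelse(REGION = 'N', 'North', ifthenelse(REGION = 'S', 'South', 'Other'))

    # ... can be expressed as a single decode(): condition/result pairs plus a default
    decode(REGION = 'N', 'North', REGION = 'S', 'South', 'Other')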
Question 14
Question
6 of 80
Advanced Data Transformation
Your source table has a revenue column and a quantity column for each month (so 12 columns for revenue and 12 columns for quantity). You want to pivot the data to get 12 rows with 2 columns.
What is the best way to achieve this? (1)
Answer
- Use a Query transform with multiple IFTHENELSE() functions
- Use the Pivot transform with two pivot sets
- Use 12 Query transforms to create the desired output, which you then merge
- Use the Pivot transform with one pivot set
Question 15
Question
12 of 80
Advanced Data Transformation
Which transforms can be used as a source in a dataflow? (2)
Answer
- Effective Date
- SQL
- Row Generation
- Hierarchy Flattening
Question 16
Question
29 of 80
Advanced Data Transformation
What benefits does the Validation transform give you? (3)
Answer
- You can see all violated rules at once
- You can call a recovery dataflow
- You can have multiple rules on a single column
- You can produce statistics
- You can interrupt the dataflow once a certain number of violations occur
Question 17
Question
31 of 80
Advanced Data Transformation
You have a Map Operation transform immediately before the target in a data flow.
What happens if all operation codes are mapped to Discard in the transform? (1)
Answer
- They are flagged for later deletion
- They are deleted from the target
- They are added to the overflow file
- They are filtered by the transform
Question 18
Question
38 of 80
Advanced Data Transformation
In a Slowly Changing Dimension Type 2 dataflow, all INSERT and UPDATE rows are passed through the Key_Generation transform.
What does the Key_Generation transform do with UPDATE rows? (1)
Answer
- It retains the current data
- It only creates a new key value when the input value is NULL
- It increases the key value by 1
- It always creates a new key value
Question 19
Question
43 of 80
Advanced Data Transformation
You have used the Table Comparison transform to establish the differences between two tables. In the subsequent History Preserving transform, all columns are listed as compare columns (neither the Valid from/to nor Current flag is used).
What will the History Preserving transform do with new rows or if any of the columns have been changed? (2)
Answer
- It will keep an INSERT as an INSERT
- It will change an UPDATE to an INSERT
- It will output an UPDATE as two rows with INSERT and UPDATE
- It will keep an UPDATE as an UPDATE
Question 20
Question
52 of 80
Advanced Data Transformation
What are the source schema requirements when using the Hierarchy Flattening transform? (1)
Answer
- Each row contains columns that function as the keys in a parent-child relationship
- Rows with NULL values in the child column have been removed or updated
- Rows with NULL values in the parent column have been removed or updated
- Each row has an operation code of either UPDATE or INSERT
Question 21
Question
60 of 80
Advanced Data Transformation
In which cases do you use the Text_Data_Processing transform? (3)
Answer
- To cleanse material master descriptions
- To analyze relationships in a sentence
- To spellcheck free-form text
- To extract keywords from a free-form text
- To evaluate the sentiment (for example, strong positive or weak negative) of a free-form text
Question 22
Question
67 of 80
Advanced Data Transformation
Your source vendor data needs to be loaded to one of three target tables, depending on the country code.
Which transform do you use to achieve this? (1)
Answer
- Validation transform
- Map_Operation transform
- Merge transform
- Case transform
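Note: a minimal sketch of the kind of expressions configured in a Case transform for this routing (the column COUNTRY_CODE, the labels and the country values are illustrative assumptions):

    # One label/expression pair per output; each label is connected to its own target table
    CASE_US: COUNTRY_CODE = 'US'
    CASE_DE: COUNTRY_CODE = 'DE'
    CASE_FR: COUNTRY_CODE = 'FR'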
Question 23
Question
70 of 80
Advanced Data Transformation
You have a Validation transform with two rules.
Rule #1: Action on Failure is “Send to Pass”
Rule #2: Action on Failure is “Send to Fail”
Where are the records that fail both Rule #1 and Rule #2 sent? (1)
Question 24
Question
73 of 80
Advanced Data Transformation
Why would you use a validation function instead of the Column Validation option in the Validation transform? (2)
Answer
- To write custom messages to DI_ERRORCOLUMNS in the fail output schema
- To capture validation statistics for later reporting purposes
- To use a function that was created in SAP Information Steward
- To allow the validation rule to be used in multiple data flows
Question 25
Question
75 of 80
Advanced Data Transformation
Why would you use the Map_CDC_Operation transform? (1)
Answer
- To identify changed rows in a relational database
- To split insert, update and delete flagged rows into three output streams
- To consume a source that contains information on whether a record was inserted, updated or deleted
- To further process rows that have been identified as changes by the Table_Comparison transform
Question 26
Question
78 of 80
Advanced Data Transformation
When would you check the Produce default output option in the Case transform? (1)
Answer
- To output all incoming rows by this default path, but the Row can be true for one case only flag has to be turned off
- To output all incoming rows by this default path
- To output rows that do not match one condition
- To output all rows that do not match any case expression by this default path
Question 27
Question
2 of 80
Data Integration Concepts
In which situation do you use the Data Services interactive debugger? (1)
Answer
- You need to debug a script object
- You need to view the data output from the dataflow
- You need to establish why a small group of records is wrong
- You need to resolve syntax validation errors in a dataflow
Question 28
Question
3 of 80
Data Integration Concepts
Why would you use additional effort to build a job that can be restarted automatically? (2)
Answer
- It allows time-consuming jobs to be restarted mid-process
- It reduces the daily manual work for the operator when errors occur
- It enables debugging
- It improves the job’s performance
Question 29
Question
15 of 80
Data Integration Concepts
What can you control with Data Services system configurations? (2)
Answer
- Substitution parameters
- Global variables
- Job server used
- Datastore configurations
Question 30
Question
17 of 80
Data Integration Concepts
What benefits does the central repository bring to a development environment? (2)
Answer
- It removes the need for developers to have their own repository
- It supports multiple versions of Data Services
- It can control which objects developers are allowed to edit
- It retains a history of objects checked in
Question 31
Question
30 of 80
Data Integration Concepts
A job you are designing requires you to extract source data that contains sales revenue facts for different regions, and separate the records for the regions into distinct output tables.
Which transform will allow you to accomplish this task? (1)
Answer
- Country_ID
- Pivot
- Case
- Map_Operation
Question 32
Question
33 of 80
Data Integration Concepts
You are part of a development team that is extracting data from external systems using flat files. You must combine all incoming address data sets and produce a single output data set using the Merge transform.
Which conditions must be met to successfully run the Merge transform? (3)
Answer
- All nested schemas from the input sets must be consistent
- All sources have the same column names
- All sources must have the same column data types
- All hierarchical data must have matching names and data types at the highest level only
- All duplicates have been removed from source files
Question 33
Question
37 of 80
Data Integration Concepts
When using Data Services scripting language, which syntax must be followed when creating an expression? (2)
Answer
- Each statement ends in a semicolon (;)
- Variable names start with a dollar sign ($)
- Values of an expression are qualified by parentheses ()
- Function calls pass parameters within square brackets [ ]
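Note: a minimal script sketch illustrating the scripting-language conventions the question refers to (the variable $v_count is an illustrative assumption, declared at the job or workflow level):

    # Statements end with a semicolon; variable names start with a dollar sign
    $v_count = 1;
    # Function parameters are enclosed in parentheses
    print('Load started on ' || to_char(sysdate(), 'YYYY.MM.DD'));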
Question 34
Question
46 of 80
Data Integration Concepts
When analyzing your customer’s data, you discover a data entry error in the source system. The contact title “Director” has been entered as “Vice President”.
Which function should be used to correct these entries? (1)
Answer
- Word_ext
- Ltrim
- Search_Replace
- Match_pattern
Question 35
Question
57 of 80
Data Integration Concepts
In which of the following objects can built-in functions be used? (3)
Answer
- Query transform
- Merge transform
- Map CDC transform
- Conditionals
- Scripts
Question 36
Question
63 of 80
Data Integration Concepts
What are the advantages of using the lookup_ext() function as a new function call compared to using it in a mapping? (2)
Answer
- You can have multiple input conditions
- You can get better performance
- You can have multiple return columns
- You can specify a cache setting
Question 37
Question
64 of 80
Data Integration Concepts
You are reading sales order data from a file. The file contains order numbers, but each one can have multiple rows (see screenshot below).
How can you populate an additional column with the order line number identifying each line item per order? (1)
Answer
- Use a gen_row_num_per_group(ORDERNUMBER) function on unsorted data
- Use the count_distinct(ORDERNUMBER) function on the data sorted by ORDERNUMBER
- Use a gen_row_num_per_group(ORDERNUMBER) function on the data sorted by ORDERNUMBER
- Use the count(*) function on unsorted data
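Note: a minimal sketch of how gen_row_num_per_group() behaves when used as the mapping of the new column (the sample order numbers are illustrative):

    # Mapping for the new order line column, with the input sorted by ORDERNUMBER
    gen_row_num_per_group(ORDERNUMBER)

    # Sample effect on sorted input: the counter restarts for each new order number
    #   ORDERNUMBER 1001 -> 1, 2, 3
    #   ORDERNUMBER 1002 -> 1, 2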
Question 38
Question
66 of 80
Data Integration Concepts
Which objects can you use to read data from SAP NetWeaver Business Warehouse? (2)
Answer
- XML file
- Open hub
- File format
- Tables
Question 39
Question
71 of 80
Data Integration Concepts
Which of the following use cases are common for scripts? (2)
Question 40
Question
4 of 80
Complex Design Methodology
A new team member wants to browse the content of an existing local repository.
How do you allow them to view the repository without making changes? (2)
Answer
- Copy the repository and instruct the team member to view the copy in the Repository Manager
- Export the repository’s metadata to an XML file and instruct the team member to open it in SAP Information Steward
- Assign the team member “View” access level to the repository in the Central Management Console and instruct them to view it from the Designer
- Demonstrate to the team member how to view the repository using the Auto Documentation feature in the Management Console
Question 41
Question
24 of 80
Complex Design Methodology
What file formats are available when exporting Data Services objects? (2)
Answer
- An XML file
- A CSV file
- An XSD file
- An ATL file
Question 42
Question
27 of 80 (D)
Complex Design Methodology
You have two workflows. The second workflow should only run if the first one was successful.
How can you achieve this? (1)
Answer
- You embed the first workflow in a try-catch
- You connect the two workflows with a line
- You add a script between the workflows using the error_number() function
- You use a conditional for the second workflow
Question 43
Question
32 of 80
Complex Design Methodology
Which of the following are correct sequences of embedding objects within objects? (3)
Answer
- Job -> Dataflow -> Table Reader Object -> Dataflow
- Project -> Job -> Workflow -> Dataflow -> Table Reader Object
- Project -> Job -> Workflow -> Workflow -> Dataflow -> Dataflow
- Project -> Job -> Script -> Table Reader Object
- Project -> Job -> Dataflow -> Table Reader Object
Question 44
Question
34 of 80
Complex Design Methodology
You have the requirement to build a process that handles both an initial load and a delta load. For most targets, the initial load and the delta load can use the same dataflow logic.
How does SAP recommend that you design the job(s)? (1)
Answer
- One job is used for the initial load, and another job is used for the delta load. The delta dataflows are copied from the initial load dataflows where necessary
- One job is used for the initial load, and another job is used for the delta load. Both call the same objects where required
- One job is used for both the initial and the delta load. A conditional object is used to start the initial or delta dataflow where necessary
- One job is used for the initial load and the delta load. You control the dependencies by using scripting
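Note: a minimal sketch of how a conditional object is commonly parameterized to switch between the two loads (the global variable $G_LOAD_TYPE and its value are illustrative assumptions):

    # If expression of the conditional object, evaluated at run time
    $G_LOAD_TYPE = 'INITIAL'
    # Then branch: call the initial-load dataflows; Else branch: call the delta dataflows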
Question 45
Question
36 of 80
Complex Design Methodology
You want to display an object’s description in the Designer workspace.
What tasks must be performed to accomplish this? (3)
Answer
- Click the View Enabled Descriptions button on the toolbar
- Enter a description in the properties of the object
- Disable Hide Non-Executable Elements in the difference viewer
- Right-click the object, then choose Enable Description
- Right-click the object in the local object library, then choose View Where Used
Question 46
Question
41 of 80
Complex Design Methodology
You have three workflows:
• Customer
• Salesoffice
• Geography
The Salesoffice and Customer workflows have a dependency that the Geography workflow is loaded first.
How does SAP recommend dealing with the dependency? (1)
Answer
- Connect the Geography workflow to the Customer workflow and the Salesoffice workflow
- Connect the Geography workflow to the Customer workflow, which in turn is connected to the Salesoffice workflow
- Have the Geography workflow as the first object in the Customer workflow and the Salesoffice workflow, and set them all to Execute only once
- Have two jobs: the first one calls the Geography workflow and the second one calls the other workflows
Question 47
Question
45 of 80
Complex Design Methodology
The design of a complex job requires that a dataflow occur in multiple workflows. This dataflow should run at most one time in a single execution of the job.
What would you do to prevent the dataflow from running multiple times? (1)
Answer
- Set the Number of Loaders on the target in the data flow to 1
- Enable the Execute Only Once option in the dataflow properties
- Set the Degree of Parallelism on the data flow to 1
- Enable the Execute Only Once option in the properties of each of the workflows
Question 48
Question
53 of 80
Complex Design Methodology
Your business user has given you the following source-to-target mapping document.
What issues do you need to clarify with the business user? (1)
Answer
- The data type in the price column in the target should be changed to decimal(4,2)
- The to_date function will be used for the transformation of the ORDER_DATE column
- The transformation of the B_CODE column will be performed using regular expressions
- The N_DAT column will have some values truncated in the target table
Question 49
Question
56 of 80
Complex Design Methodology
How does a workflow differ from a job? (1)
Answer
- A workflow can only call embedded dataflows
- In recovery mode, only workflows with “recover as a unit” enabled will be executed
- A workflow has its own global variables
- A workflow must be called by a job in order to execute
Question 50
Question
74 of 80
Complex Design Methodology
Which of the following components are required to run batch jobs in Data Services? (2)
Answer
- Local repository
- Job server
- Designer
- Central repository
Question 51
Question
5 of 80
Performance Optimized Design
You need to perform an initial load of a target table with data from an extremely large source table in another database. The only mappings are columns being renamed. The source has been set up to allow for maximum throughput; for example, the source table is partitioned.
Which options should you check before the first execution of the dataflow? (3)
Answer
- Check if the Degree Of Parallelism property of the dataflow object is set to the same number as the CPU cores of the Data Services server
- Check the Degree Of Parallelism property of the dataflow object and set it to the same number as the number of partitions in the source table
- Check that the target table options Enable Partitions, Number of Loaders and Enable Bulkloader are aligned to the database you are using
- In the source table options, check if the Enable Partitions flag is available and turned on
- Check if the bulkloader is turned on, although the Number of Loaders and Enable Partitions options are greyed out
Question 52
Question
10 of 80 (D)
Performance Optimized Design
A dataflow contains a Pivot transform, which is followed by a Query transform that performs an aggregation.
How do you push down the aggregation query to the database? (1)
Answer
- The Data_Transfer transform must be used twice, ahead of the Pivot transform and after the Query transform
- The Data_Transfer transform must go ahead of the Query transform
- The Data_Transfer transform must follow the Query transform
- The Data_Transfer transform must go ahead of the Pivot transform
Question 53
Question
16 of 80 (D)
Performance Optimized Design
Your dataflow reads a subset of data (where ORDER_TYPE = 'A') from a relational source database and loads it into another target database. The only mappings you are using are columns being renamed and the functions trim_blanks() and substring().
To isolate the performance bottleneck, what test would you perform first? (1)
Answer
- Add a Map_Operation discarding all rows between the query and the target table
- Add a Map_Operation discarding all rows between the source table and the query transform with the where clause and the functions
- Remove the functions trim_blanks() and substring() from the query mappings
- Use the Display Optimized SQL menu item and copy the shown SQL statement into a database query tool
Question 54
Question
18 of 80
Performance Optimized Design
You have a source table and lookup table in the same database.
Which is the fastest method to retrieve a lookup value from a table based on a primary key? (1)
Answer
- Use the sql() function in the query mapping
- Push down the lookup as a join into the source database
- In the database, write a stored procedure to return the row
- Use the lookup_ext function with pre_load_cache
Question 55
Question
21 of 80
Performance Optimized Design
Your customer is using the lookup function and needs to improve performance.
Under what conditions should they use the Demand Load Cache option? (3)
Answer
- All rows of the lookup table will eventually be needed
- Lookup rows are used more than once
- Columns referenced in the lookup condition are indexed
- Optimization statistics have been collected for the job
- The lookup table is too large to fit in cache
Question 56
Question
23 of 80
Performance Optimized Design
After modifying an existing job, you notice that the run time is longer than expected.
Where in the Management Console can you observe the progress of row counts to determine the location of a bottleneck? (1)
Answer
- Use the Administrator web page to access the Trace log
- Use the Impact and Lineage Analysis option to access the Trace log
- Use the Administrator web page to access the Monitor log
- Use the Impact and Lineage Analysis option to access the Monitor log
Question 57
Question
44 of 80
Performance Optimized Design
To improve load balancing, you decide to distribute the execution of a job across multiple job servers within a server group.
What objects can be distributed across the server group? (3)
Answer
- Embedded dataflow
- Sub-dataflow
- Job
- Workflow
- Dataflow
Question 58
Question
55 of 80
Performance Optimized Design
You are working on a dataflow where performance is critical. You use the Display Optimized SQL menu item and expect to see an INSERT…SELECT statement, but the dataflow is not optimized as a full pushdown.
What could have caused this? (3)
Answer
- The dataflow produces a warning saying “varchar(20) converted to varchar(5)”
- The dataflow has two query transforms, although they are simple
- The dataflow has two target tables
- The dataflow has the Bulkloader option enabled
- The dataflow is using a join clause containing a greater than condition
Question 59
Question
62 of 80 (D)
Performance Optimized Design
After a batch job executes, you view the Monitor log for execution statistics. For one of the tasks, you observe that the Elapsed Time column is 0.0000 (zero), while Actual Time for the task is greater than zero.
What might explain this outcome? (1)
Answer
- The task was performed on the access server
- The task was pushed down to the database server
- An exception was raised in the job prior to the task
- The task was performed using data cached in job server memory
Question 60
Question
65 of 80
Performance Optimized Design
Which transforms execute SQL statements themselves? (2)
Answer
- Map CDC Operation transform
- Key Generation transform
- History Preserving transform
- Table Comparison transform
Question 61
Question
80 of 80
Performance Optimized Design
Which factor do you need to consider when setting the degree of parallelism? (1)
Answer
- The number of access servers available
- The number of CPUs available on the server
- The number of source tables used in the dataflow
- The amount of disk space available on the server
Question 62
Question
19 of 80 (D)
Recovery and Troubleshooting
You have built a job using many workflows and dataflows.
What is the SAP-recommended way to execute a single workflow for testing? (1)
Answer
- Copy and paste each of the objects within the workflow into a test job
- Execute the job in debug mode and skip the other objects
- Use conditionals for each workflow and a complex control logic
- Drag the workflow object from the object library into a test job
Question 63
Question
25 of 80
Recovery and Troubleshooting
You created a job containing a number of workflows. You want to implement a solution that allows you to restart loading the job from a point of failure.
What do you need to do before starting the job? (1)
Answer
- Turn on the Enable Recovery flag in the execution properties
- Turn on the Recover as a Unit flag in the workflow properties
- Turn on the Recover from Last Failed Execution flag in the execution properties
- Turn on the Enable Recovery flag and the Recover from Last Failed Execution flag in the execution properties
Question 64
Question
42 of 80 (D)
Recovery and Troubleshooting
You execute a job with Enable Recovery activated and one of the data flows in the job raises an exception, interrupting execution. You run the job again with Recover from Last Failed Execution enabled.
What happens to the data flow that raised the exception during the first execution? (1)
Answer
- It is rerun only if the dataflow is part of a recovery unit
- It is rerun from the beginning and the design of the data flow must deal with partially loaded data
- It is rerun from the beginning and the partially loaded data is always handled automatically
- It is rerun starting with the first failed row
Question 65
Question
50 of 80
Recovery and Troubleshooting
Where can you set up breakpoints for the interactive debugger? (1)
Answer
- In a workflow
- In a script
- In a job
- In a dataflow
Question 66
Question
51 of 80
Recovery and Troubleshooting
A job is failing with a primary key constraint violation error during testing.
How can you run the job successfully and also capture the rows that are failing? (1)
Answer
- Put the data flow in a try-catch block and use the error_message function
- Set the Auto Correct Load option to Yes in the target table editor
- Enable the Distinct Rows option in the Query transform
- Set the Use Overflow File option to Yes in the target table editor
Question 67
Question
68 of 80
Recovery and Troubleshooting
Your job design includes an initialization script that truncates rows in the target prior to loading. The job will make use of automatic recovery.
Which of the following behaviours can you expect when you run the job in recovery mode? (2)
Answer
- The job will rerun all workflows and scripts. Only data flows that ran successfully in the previous execution are skipped when using automatic recovery
- The job will start with the flow that caused the error. If this is after the initialization script, the initialization script will be skipped
- The job will execute the script if it is part of a workflow marked as a recovery unit, but only if an error was raised within that workflow
- The job will execute the script if it is part of a workflow marked as a recovery unit, no matter where in the job’s flow the error occurred
Question 68
Question
72 of 80 (D)
Recovery and Troubleshooting
What is the consequence of using the loader option Include in Transaction? (1)
Answer
- It requires a single database connection and might slow down performance
- It requires all data to be staged in a temporary database table
- It requires the dataflow to have a single target database
- It requires a script object after the dataflow to commit the data
Question 69
Question
77 of 80
Recovery and Troubleshooting
Which of the following cause conversion warnings? (2)
Answer
- varchar(10) converted to varchar(5)
- int to decimal(20,3)
- varchar(5) converted to varchar(10)
- to_date(sysdate(),'YYYY.MM.DD')
Question 70
Question
9 of 80
Implement Change Data Capture
You have a delta dataflow that is loading a large target table containing historical data. The table contains many indexed columns. The dataflow uses the Key_Generation transform to populate a surrogate key column in the target table. A typical delta run will update 80% of the target table.
How can you improve the performance of the delta load? (1)
Answer
- Truncate the target table before loading
- Set the target table option Include in Transaction to Yes
- Replace the Key_Generation transform with a Query transform using the gen_row_num function
- Before loading, drop the indexes except the primary key on the target and recreate the indexes after loading
Question 71
Question
11 of 80
Implement Change Data Capture
What are all the available options you have to identify changed rows in the source database with Data Services? (1)
Question 72
Question
26 of 80 (D)
Implement Change Data Capture
You want to load a target database that is queried day and night by users worldwide.
How do you design your job to have the least impact on users who are currently querying the database? (1)
Answer
- Use the option to truncate the target table and bulkload all data quickly
- Use the Table Comparison transform in all dataflows
- Use the sql() function to lock all tables at the beginning of the job
- Use the Include in Transaction option in all dataflows and commit once at the end of the job
Question 73
Question
28 of 80
Implement Change Data Capture
Your target database has a dimension table with 100 columns and 50,000,000 rows. More than half of the records change each day. You are designing a job to capture changes to this table without preserving history.
Which method is the fastest? (1)
Answer
- Use the Delete Data from Table Before Loading option
- Use the Table Comparison transform
- Use Bulk Load with the truncate option
- Use the Auto-Correct Load option
Question 74
Question
40 of 80
Implement Change Data Capture
How can you implement a target-based delta that deals with inserts, updates and deletes in the source? (1)
Answer
- Use Auto correct load
- Use a Map_Operation transform
- Use a Map_CDC_Operation transform
- Use a Table Comparison transform
Question 75
Question
47 of 80
Implement Change Data Capture
Which transforms are typically used to implement a Slowly Changing Dimension Type 2? (3)
Answer
- Data_Transfer
- History_Preserving
- Map_CDC_Operation
- Key_Generation
- Table_Comparison
Question 76
Question
58 of 80
Implement Change Data Capture
Which transforms can be connected directly to the Table Comparison transform’s output? (1)
Answer
- History_Preserving
- Key_Generation
- Map_Operation
- Query
Question 77
Question
69 of 80
Implement Change Data Capture
In which situation is it appropriate to use an update timestamp column to capture changes in source data? (1)
Answer
- You need to capture intermediate changes
- You need to capture physical deletes from source
- Almost all of the rows have changes
- There is an index on the timestamp column
Question 78
Question
8 of 80
Data Management
You have previously created and saved a database datastore.
Which properties can you change in the Edit Datastore dialog box? (3)
Answer
- Database name
- User name and password
- Database server name
- Datastore name
- Database version
Question 79
Question
22 of 80
Data Management
As part of the project team for a data migration engagement, you have been assigned to help create the source-to-target mapping document.
What is the main information that needs to be captured during the process? (2)
Answer
- Data flow mapping
- Business rules
- User rules
- Data Security
Question 80
Question
49 of 80
Data Management
What is the relationship between datastore configurations and system configurations? (1)
Answer
- A datastore configuration can only reference one system configuration
- System configurations and datastore configurations are exported together as one file
- Substitution parameters link system configurations to datastore configurations
- A system configuration consists of exactly one datastore configuration for each datastore in the repository