Parameter | Value |
---|---|
File Format | Delimited |
Heading (Number of Lines) | 1 |
Record Separator | MS-DOS |
Field Separator | Other |
[Field Separator] Other | , (comma sign - hexadecimal 2C) |
Text Delimiter | " (double quotation marks) |
Decimal Separator | empty, not specified |
For the export, use , (comma) as the delimiter, " (double quotation marks) as the text delimiter, Names on first line, and the ascii character set. Name the input files ODI_IN_XXX and the output (and no-match) files ODI_OUT_XXX, where XXX is the name of the entity.

The export creates a folder named after your metabase (<metabase_name>) at the location that you specified. This folder contains a projectN sub-folder (where N is the project identifier in Oracle Data Quality). This project folder contains the following folders among others:

- A data folder for the entity data files, which have a .DAT extension. As you specified No data for the export, this folder is empty.
- A ddl folder containing the entity metadata files (.DDX and .XML). These files describe the data files' fields. They are prefixed with eNN_, where NN is the entity number. Each entity is described in two metadata files: eNN_<name of the entity>.ddx is the description of the entity with possible duplicated columns (suitable for fixed files), and eNN_<name of the entity>_csv.ddx is the description of the entity with non-duplicated columns (suitable for fixed and delimited files). It is recommended to use the latter files for the reverse-engineering process.
- The batch script runprojectN. This script runs the quality process and is the one that will be triggered by Oracle Data Integrator.
- A settings folder with the settings files (.ddt, .sto, .stt, .stx) and the configuration file config_batch.tbl.
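As an illustration only (the layout below is assumed from the folder descriptions above and from the paths used later in this section; your export may differ), the exported tree might look like:

```
<metabase_name>/
  projectN/
    batch/
      runprojectN          <- batch script triggered by Oracle Data Integrator
      settings/
        config_batch.tbl   <- DATABASE parameter is set here
    data/                  <- .DAT data files (empty for a "No data" export)
    ddl/                   <- eNN_*.ddx, eNN_*_csv.ddx, .XML metadata files
    settings/              <- .ddt, .sto, .stt, .stx settings files
```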
After the export, open the batch script (runprojectN) and the configuration file (config_batch.tbl) in the /batch/settings sub-folder of your projectN folder:

- In config_batch.tbl, specify the location (absolute path) of the directory containing the projectN folder for the DATABASE parameter.
- In runprojectN, specify the location (absolute path) of the projectN directory for the TS_PROJECT parameter.

For example, if you have the config_batch.tbl and runproject2.* files located in C:\oracle\oracledq\metabase_data\metabase\oracledq\project2\batch, you should specify DATABASE = C:\oracle\oracledq\metabase_data\metabase\oracledq\project2\batch in the config_batch.tbl
file (remove the :: character at the beginning of the last line).
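As a convenience, the DATABASE line can be updated from a shell. This is only a sketch: the file below is a stand-in for the exported config_batch.tbl, the KEY = value layout is an assumption based on the text above, and forward slashes are used so the example runs on UNIX.

```shell
# Create a stand-in for the exported config_batch.tbl (layout assumed).
cfg=$(mktemp)
printf 'DATABASE = C:/old/path\n' > "$cfg"

# Point DATABASE at the project batch directory from the example.
new='C:/oracle/oracledq/metabase_data/metabase/oracledq/project2/batch'
awk -v val="$new" '/^DATABASE/ {print "DATABASE = " val; next} {print}' \
    "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"

cat "$cfg"   # -> DATABASE = C:/oracle/oracledq/metabase_data/metabase/oracledq/project2/batch
```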
In the /settings directory, open with an editor the settings file corresponding to the first process of your project. This file is typically named eN_transfmr_p1.stx (where N is the internal ID of the entity corresponding to the quality input file) if the first process is a transformer. In this file:

- For DATA_FILE_NAME, specify the name and location (absolute path) of your quality input file at run time.
- For FILE_DELIMITER, specify the delimiter used in the quality input file.
- For START_RECORD, specify the line number where data starts. For example, if there is a 1-line header, the value should be 2.

For example, if you have a customer_master.csv quality input file (comma-separated with one header line) located in C:/oracle/oracledq/metabase_data/metabase/oracledq/Data/, you should edit the following section:
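A sketch of how the edited section might look, using the parameter names and the example file from the text (the exact layout of the .stx file may differ in your project):

```
DATA_FILE_NAME = C:/oracle/oracledq/metabase_data/metabase/oracledq/Data/customer_master.csv
FILE_DELIMITER = ,
START_RECORD = 2
```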
In the /settings directory, open the file that corresponds to the settings of the process generating the output (cleansed) data. Typically, for a cleansing project that finishes with a Data Reconstructor process, it is named eNN_datarec_pXX.stx. Change the following value in the settings file to give the full path of the generated output file.

The reverse-engineering process uses the .ddx files located in the /ddl folder of your data quality project. For example, for a project whose data files are under C:\oracle\oracledq\metabase_data\metabase\oracledq\projectN\data, the metadata files are in the sibling /ddl folder.

Parameter | Value/Action |
---|---|
Reverse: | Customized |
Context: | Reverse-engineering Context |
Type of objects to reverse-engineer: | Table |
KM | Select the RKM Oracle Data Quality |
Parameter | Default Value | Description |
---|---|---|
DDX_FILE_NAME | *.ddx | Mask for DDX Files to process. If you have used a naming convention in the Quality project for the Entities that you want to use, enter a mask that will return only these Entities. For example, specify the ODI*_csv.ddx mask if you have used the ODI_IN_XX and ODI_OUT_XX naming convention for your input and output entities. |
USE_FRIENDLY_NAMES | No | Set this option to Yes if you want the Reverse-Engineering process to generate user-friendly names for datastore columns based on the field name specified in the DDX file. |
USE_LOG | Yes | Set to Yes if you want the reverse-engineering process activity to be logged in a log file. |
LOG_FILE_NAME | /temp/reverse.log | Name of the log file. |
User | Description |
---|---|
root (for UNIX operating systems) or winadmin (for Windows operating systems) | Some procedures must be performed by an operating system user with root or super user access. When you enter a root password during the software installation, xinetd.conf or inetd.conf (depending on your UNIX operating system) is updated to allow the correct processes to be called between the UNIX system and Oracle Data Quality software. See your operating system documentation for more information on these user types. |
Oracle Data Quality Application Administrator | An operating system user who installs the Oracle Data Quality Server application and administers the Oracle Data Quality Scheduler and License Manager. (This user is required when installing on either Windows operating systems or UNIX operating systems.) For information on creating this user, see Section 2.1.2.1, 'Define an Oracle Data Quality Application Administrator'. |
Oracle Data Quality Loader User | User who will access data import directories (located on the Oracle Data Quality server) through a login screen in the Oracle Data Quality User Interface. This user is not required if you plan to directly access data from relational sources (Oracle, IBM DB2, ODBC). You will, however, need a user id that gives you access to each specific database. NOTE: This user is only required if you plan to load data from flat file sources (delimited, COBOL, Oracle Data Quality sources). For more information on creating this user, see Section 2.1.2.2, 'Define Oracle Data Quality Loader Users'. |
Metabase Administrator | Oracle Data Quality user account that creates and maintains Oracle Data Quality repositories, and defines metabases, Oracle Data Quality users, and data connections. This user is also known as the Oracle Data Quality Repository User. NOTE: The metabase administrator is created during Oracle Data Quality installation. |
Oracle Data Quality User | Oracle Data Quality user account that accesses Oracle Data Quality metabases through the Oracle Data Quality User Interface. NOTE: Oracle Data Quality users are created by the metabase administrator after installation. |
If the Oracle Data Quality Application Administrator will use sudo, you must also grant sudo rights before installing Oracle Data Quality products. This administrator installs the server application and administers the Scheduler and License Manager on the Oracle Data Quality server. If this administrator will use sudo, make sure that the Oracle Data Quality administrator has read access to any data import directories that you define, then proceed to 'Define Oracle Data Quality Loader Users'. If the administrator will not use sudo, proceed to the next step.

To grant sudo rights, at a UNIX command prompt enter visudo. This brings up the file named sudoers for editing. Make your changes and save them with the visudo command, then verify that sudo is correctly configured.

An Oracle Data Quality Loader User needs a UNIX user id with read access to the data import directories. For example, if data is imported from /data, then you require an Oracle Data Quality Loader User (UNIX user id) who can log on to the Oracle Data Quality server and read the files from /data.
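A quick sanity check that a candidate Loader User can read an import directory. The directory below is a stand-in (the text's example is /data); run it as the Loader User.

```shell
# Stand-in for the import directory from the example (/data).
dir=/tmp
if [ -r "$dir" ] && [ -x "$dir" ]; then
  echo "readable"        # the user can list and read files here
else
  echo "not readable"
fi
```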
To check which ports are not in use on your system, use the netstat -an command. Select two available ports and make note of them for the setup procedure. (The defaults are 7600 for the Repository Port and 7601 for the Scheduler Port.)

Navigate to /Disk1 in the ODQ installation directory (where you saved the ODQ .zip or .jar file) and run the following command:

For the latest certification information, see http://www.oracle.com/technology/software/products/ias/files/fusion_certification.html
The installation log files are written to the Oracle_Inventory_Location/log (on UNIX operating systems) or Oracle_Inventory_Location\logs (on Windows operating systems) directory. On UNIX systems, if you do not know the location of your Oracle Inventory directory, you can find it in the oraInst.loc file in the following directories (default locations): /etc/oraInst.loc or /var/opt/oracle/oraInst.loc. On Windows operating systems, the default location is Program Files\Oracle\Inventory\logs.
No. | Screen | When Does This Screen Appear? | Description and Action Required |
---|---|---|---|
1 | | Always | Click Next to continue. |
2 | | Always | This screen analyzes the host computer to ensure that specific operating system prerequisites have been met. If any of the prerequisite checks fail, then a short error message appears in the bottom portion of the screen. Fix the error and click Retry to try again. If you want to ignore the error or warning messages and continue with the installation, click Continue. Click Abort to stop prerequisite checking for all components. |
3 | | Always | In the Location field, enter the Oracle home (referred to in this guide as ODQ_HOME) where your products will be installed. Click Next to continue. |
4 | | Always | This screen configures the Metabase Server. Provide the required information and click Next to continue. |
5 | | Always | Review the summary and click Install to continue. |
6 | | Always | The installer automatically executes each configuration assistant in sequence, displaying the progress in the Status column. No action is required on this screen. |
7 | | Always | If you want to save this configuration to a text file, click Save. This file can be used later if you choose to perform the same installation from the command line. Click Finish to close the installer. |
No. | Screen | When Does This Screen Appear? | Description and Action Required |
---|---|---|---|
1 | | Always | Click Next to continue. |
2 | | Always | Select the components you want to install, then click Next to continue. |
3 | | Always | Click Next to continue. |
4 | | Always | Specify the absolute path to your Oracle home (referred to in this guide as ODQ_HOME). Click Next to continue. |
5 | | Always | This screen configures the Metabase Server. Provide the required information and click Next to continue. |
6 | | Only if you selected Metabase Client on the Select Components (Windows operating systems only) screen. | This screen configures the client to connect to the Metabase and ODBC Servers. Provide the required information and click Next to continue. |
7 | | Always | Review the summary and click Install to continue. |
8 | | Always | The installer automatically executes each configuration assistant in sequence, displaying the progress in the Status column. No action is required on this screen. |
9 | | Always | If you want to save this configuration to a text file, click Save. This file can be used later if you choose to perform the same installation from the command line. Click Finish to close the installer. |
Table or Directory Name | Naming Convention Used |
---|---|
General and Asian Postal Tables | XXMMMq.ext, where XX is the 2-letter country code, MMM is the abbreviation for the month the postal table was issued, and ext is either zip or tar. For example: AUJULq.zip is the Australian postal table for July. |
Global Postal Tables NOTE: Global Postal Tables are a subset of international postal tables that are invoked from within Oracle Data Quality. Global Postal Tables are distinct because they require an additional service. The postal tables in this subset are: Austria, Brazil, Czech Republic, Denmark, Finland, Greece, Hungary, Ireland, Mexico, New Zealand, Norway, Poland, and Sweden. | XXXMMMYY.ext, where XXX is the 3-letter country code, MMMYY represents the abbreviation for the month and year the postal table was issued, and ext is either zip or tar. For example: DENJAN09.zip is the Denmark postal table for January 2009. |
Census Directories NOTE: Census data is available only for the United States. | USCMMMq.ext - This is the name of the United States census directory that includes the Interpolated Rooftop files. MMM is the abbreviation for the month the directory was issued and ext is either zip or tar. USXMMMq.ext - This is the name of the United States census directory that includes the ZIP+4 Centroid files. MMM is the abbreviation for the month the directory was issued and ext is either zip or tar. USPMMMq.ext - This is the name of the file that contains only the Interpolated Plus directory, where MMM is the abbreviation for the month the directory was issued and ext is either zip or tar. |
DPV Directory | DPVMMMq.ext - This is the name of the US Delivery Point Validation directory, where MMM is the abbreviation for the month the directory was issued and ext is either zip or tar. |
LACSLink Directory | USLMMMq.ext - This is the name of the United States LACSLink directory, where MMM is the abbreviation for the month the directory was issued and ext is either zip or tar. |
SuiteLink Directory | USLMMMq.ext - This is the name of the United States SuiteLink directory, where MMM is the abbreviation for the month the directory was issued and ext is either zip or tar. |
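The naming convention for the general and Asian postal tables can be sanity-checked with a tiny script (values taken from the Australian example above):

```shell
# XXMMMq.ext: XX = 2-letter country code, MMM = month abbreviation,
# ext = zip or tar.
country=AU
month=JUL
ext=zip
echo "${country}${month}q.${ext}"   # -> AUJULq.zip
```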
ODQ_HOME/oracledq/12/tables/postal_tables (on UNIX operating systems) or ODQ_HOME\oracledq\tables\* (on Windows operating systems)
Run ./mtb_admin (or madmin). Use the _control metabase name to apply the alternative location to all metabases; otherwise an error such as "'census_directory' found in table 'default settings'" is reported. Copy the xxCITY files from the default installation directory to the new, alternative directory. At the mtb_admin prompt, type the appropriate command, where d:\newpostal is the path of the alternative location. Type exit to close the command prompt window.
to close the command prompt window.ODQ_HOME/oracledq/12/Software/bin
directory).rdata
directory.kbase
directory.gaserver.ini
file.ODQ_HOME/oracledq/12/Software/bin/latlong
directory, overwriting the existing file.gaserver.ini
file must point to the same locations../mtb_admin
_control
metabase..Z
extension, which indicates compression. If necessary, uncompress files by entering the following command:ODQ_HOME/oracledq/12/Software/GA_server
ODQ_HOME/oracledq/12/Software/bin
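On most UNIX systems, .Z files are handled by uncompress or gzip -d. A self-contained sketch; it fabricates a demo .Z file with gzip, since the real postal/GA files are not available here:

```shell
tmp=$(mktemp -d) && cd "$tmp"
printf 'hello\n' > sample.txt
gzip -S .Z sample.txt            # create a demo .Z file (stand-in)
for f in *.Z; do
  gzip -d -S .Z "$f"             # same effect as: uncompress "$f"
done
cat sample.txt                   # -> hello
```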
Log on as a user with the appropriate rights (for example, domain_name\jsmith), and set the following environment variables:

Oracle_QUALITY=ODQ_HOME/oracledq/12/Software
LD_LIBRARY_PATH=ODQ_HOME/oracledq/12/Software/bin
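These variables can be set in a POSIX shell profile. A sketch, where the install root /u01/app/odq is an assumed example path:

```shell
# Assumed install root; adjust to your environment.
ODQ_HOME=/u01/app/odq
export Oracle_QUALITY="$ODQ_HOME/oracledq/12/Software"
# Prepend the ODQ bin directory to the library search path.
export LD_LIBRARY_PATH="$ODQ_HOME/oracledq/12/Software/bin${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$Oracle_QUALITY"   # -> /u01/app/odq/oracledq/12/Software
```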
Oracle Data Quality uses inetd, a daemon process that handles network services on a UNIX operating system. Upon execution, inetd reads its configuration information from a configuration file which, by default, is /etc/inetd.conf. The installation updates the inetd.conf file; for Linux, HP, and AIX systems, no further action is required. If you modify the inetd.conf file for any reason, be sure to recycle it.

On Solaris 10, inetd reads configuration information from a different location. If you have installed Oracle Data Profiling and Quality components on a Solaris 10 system, log on as the root user and issue, at the command prompt, the command that converts the inetd.conf file to the format required by Solaris 10.