# Top 50 Informatica Interview Questions & Answers

> Original: [https://www.guru99.com/informatica-interview-questions.html](https://www.guru99.com/informatica-interview-questions.html)

**1. What do you mean by enterprise data warehousing?**

When an organization's data is created at a single point of access, it is called an enterprise data warehouse. The data is stored in a single source and served to the server with a global view, and periodic analysis can be run on that same source. It gives better results, but the time required is high.

**2. What is the difference between a database, a data warehouse, and a data mart?**

A database holds a set of closely related data and is normally small compared with a data warehouse. A data warehouse contains assortments of all sorts of data, and data is taken out of it only according to the customer's needs. A data mart, in turn, is a set of data designed to serve the needs of a particular domain; for instance, an organization keeps separate chunks of data for its different departments, i.e., sales, finance, marketing, and so on.

**3. What is meant by a domain?**

When all related relationships and nodes are covered by a single organizational point, it is called a domain. Data management can be improved through it.

**4. What is the difference between a repository server and a powerhouse?**

The repository server controls the complete repository, which includes tables, charts, and various procedures; its main function is to ensure the repository's integrity and consistency. A powerhouse server, by contrast, governs the execution of the various processes against the data held in the server's repository database.

**5. How many repositories can be created in Informatica?**

Any number of repositories can be created in Informatica, but ultimately it depends on the number of ports.

**6. What is the benefit of partitioning a session?**

Partitioning a session means running separate execution sequences within the session. Its main purpose is to improve the server's operation and efficiency: the extraction and the other transformation and output work of the individual partitions are carried out in parallel.

**7. How are indexes created after completing the load process?**

To create indexes after the load process, command tasks at the session level can be used. The index-creation scripts can be aligned with the session's workflow or with the post-session execution sequence. This type of index creation cannot be controlled after the load process at the transformation level.

**8. Explain sessions, and explain how batches are used to combine executions.**

A set of instructions that must be executed to convert data from a source to a target is called a session. A session can be run using the Session Manager or the pmcmd command. Batch execution is used to combine session executions, either serially or in parallel: a batch can contain different sessions that run in parallel or one after another.
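Since the answer mentions pmcmd, here is a minimal sketch of starting a workflow (and thereby the sessions inside it) from the shell. The service, domain, credential, folder, and workflow names are hypothetical placeholders for your environment:

```shell
# Start a workflow, and the sessions inside it, from the command line.
# All names below are hypothetical placeholders.
pmcmd startworkflow -sv MyIntegrationService -d MyDomain \
    -u dev_user -p dev_password -f SalesFolder wf_load_sales
```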
**9. How many sessions can one group in a batch?**

One can group any number of sessions in a batch, but migration is easier when the number of sessions in a batch is small.

**10. Explain the difference between a mapping parameter and a mapping variable.**

A value that changes during the session's execution is called a mapping variable; upon completion, the Informatica server stores the end value of the variable and reuses it when the session restarts. Values that do not change during the session's execution are called mapping parameters. The mapping procedure explains the mapping parameters and their usage, and values are assigned to these parameters before the session starts.
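Mapping parameters and the initial values of mapping variables are typically supplied through a parameter file named in the session or workflow properties. A minimal sketch, assuming a hypothetical folder, workflow, session, and parameter names (the `$$` prefix marks mapping parameters and variables):

```
[SalesFolder.WF:wf_load_sales.ST:s_m_load_orders]
$$LoadDate=2024-01-01
$$Region=EMEA
```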
**11. What is a complex mapping?**

The following are the features of a complex mapping:

* Difficult requirements
* A large number of transformations
* Complex business logic

**12. How can one identify whether a mapping is correct or not without connecting a session?**

Without connecting a session, one can find out whether the mapping is correct with the help of the debugging option.

**13. Can mapping parameters or variables created in one mapping be used in any other reusable transformation?**

Yes, because a reusable transformation does not contain any mapplet or mapping.

**14. Explain the use of the aggregator cache file.**

Aggregator transformations are processed in chunks of instructions during each run, storing transitional values in local buffer memory. If extra memory is required, the aggregator writes the transformation values to extra cache files.

**15. Briefly describe the Lookup transformation.**

Lookup transformations are those transformations that have access rights to RDBMS-based data sets. The server speeds up access by using lookup tables to look at explicit table data or the database. The resulting data is obtained by matching the lookup condition for all lookup ports delivered during the transformation.

**16. What does "role-playing dimension" mean?**

Dimensions that are used to play several roles while remaining in the same database domain are called role-playing dimensions.

**17. How can repository reports be accessed without SQL or other transformations?**

Repository reports are established by the Metadata Reporter. Since it is a web application, no [SQL](/sql.html) or other transformation is needed.

**18. What types of metadata are stored in the repository?**

The types of metadata stored include source definitions, target definitions, mappings, mapplets, and transformations.

**19. Explain code page compatibility.**

When data moves from one code page to another, no data loss occurs provided both code pages have the same character set. All the characters of the source page must be available in the target page. If they are not, the target is merely a subset, and data loss will certainly occur during the transformation because the two code pages are not compatible.

**20. How can you validate all mappings in the repository simultaneously?**

All the mappings cannot be validated simultaneously, because only one mapping can be validated at a time.

**21. Briefly explain the Aggregator transformation.**

It allows one to do aggregate calculations such as sums and averages. Unlike the Expression transformation, which operates row by row, the Aggregator can perform its calculations on groups.

**22. Describe the Expression transformation.**

In this form of transformation, values can be calculated within a single row before being written to the target. It is used to perform non-aggregate calculations, and conditional statements can also be tested before the output results go to the target tables.
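As an illustration of such row-level calculations and conditionals, a small sketch in Informatica's expression language; the port names are hypothetical:

```
-- Output port TAX: a non-aggregate, per-row calculation
SALARY * 0.2

-- Output port SALARY_BAND: a conditional tested before rows reach the target
IIF(SALARY > 50000, 'HIGH', IIF(SALARY > 20000, 'MEDIUM', 'LOW'))
```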
**23. What do you mean by the Filter transformation?**

It is a medium for filtering rows in a mapping. The data is passed through the filter transformation and the filter condition is applied to it. The filter transformation contains input/output ports for all its fields, and only the rows that meet the condition pass through the filter.
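A filter condition is simply a boolean expression over the transformation's ports; rows for which it evaluates to TRUE are passed on. A minimal sketch with hypothetical port names:

```
-- Filter condition: keep only active, non-empty orders with a known customer
STATUS = 'ACTIVE' AND ORDER_AMOUNT > 0 AND NOT ISNULL(CUSTOMER_ID)
```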
**24. What is the Joiner transformation?**

The Joiner transformation combines two affiliated heterogeneous sources residing in different locations, whereas a Source Qualifier transformation can only combine data emerging from a common source.

**25. What is the Lookup transformation?**

It is used for looking up data in a relational table through the mapping. The lookup definition is imported from any relational database source to which both client and server can connect. One can use multiple Lookup transformations in a mapping.

**26. How is the Union transformation used?**

It is a multiple-input-group transformation that can be used to combine data from different sources. It works like the UNION ALL statement in SQL, which combines the result sets of two SELECT statements.
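For reference, the SQL analogue the answer refers to; table and column names are hypothetical:

```sql
-- The Union transformation behaves like UNION ALL:
-- it concatenates the row sets and does not remove duplicates.
SELECT customer_id, amount FROM orders_us
UNION ALL
SELECT customer_id, amount FROM orders_eu;
```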
**27. What do you mean by incremental aggregation?**

The incremental aggregation option is enabled whenever a session is created for a mapping that aggregates. PowerCenter then performs incremental aggregation through the mapping and the historical cache data, carrying out the new aggregate calculations incrementally.

**28. What is the difference between a connected lookup and an unconnected lookup?**

When a lookup takes its inputs directly from other transformations in the pipeline, it is called a connected lookup. An unconnected lookup does not take inputs directly from other transformations; instead, it can be used in any transformation and invoked as a function using a :LKP expression. An unconnected lookup can therefore be called multiple times in a mapping.
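A sketch of calling an unconnected lookup from an Expression transformation via :LKP; the lookup transformation and port names are hypothetical:

```
-- Call the unconnected lookup lkp_get_customer_name with CUSTOMER_ID as input,
-- and substitute a default when no row matches.
IIF(ISNULL(:LKP.lkp_get_customer_name(CUSTOMER_ID)),
    'UNKNOWN',
    :LKP.lkp_get_customer_name(CUSTOMER_ID))
```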
**29. What is a mapplet?**

A reusable object created with the Mapplet Designer is called a mapplet. It lets one reuse transformation logic across multiple mappings, and it contains a set of transformations.

**30. Briefly define a reusable transformation.**

A reusable transformation can be used numerous times in mappings. Unlike ordinary transformations, it is stored as metadata independent of any mapping that uses it. Whenever any change is made to a reusable transformation, the mappings in which it is used become invalid.

**31. What does "update strategy" mean, and what are its different options?**

Informatica processes data row by row, and by default every row is marked to be inserted into the target table. The update strategy is used whenever a row has to be updated or inserted based on some condition, and that condition must be specified in the Update Strategy transformation so the processed row is marked accordingly. The options are DD_INSERT, DD_UPDATE, DD_DELETE, and DD_REJECT.
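A sketch of an update-strategy expression that flags each row with one of those constants; the port names are hypothetical:

```
-- Reject rows without a key, insert new customers, update existing ones
IIF(ISNULL(CUSTOMER_ID), DD_REJECT,
    IIF(IS_NEW = 1, DD_INSERT, DD_UPDATE))
```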
**32. What scenario compels the Informatica server to reject files?**

This happens when the server encounters DD_REJECT in an Update Strategy transformation, or when a row violates a database constraint.

**33. What is a surrogate key?**

A surrogate key is a replacement for the natural primary key. It is a unique identifier for each row in the table. It is very useful because a natural primary key can change, which eventually makes updates more difficult; surrogate keys are always numeric (a digit or an integer).
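In PowerCenter, surrogate keys are commonly generated with a Sequence Generator transformation (its NEXTVAL port). The standard-SQL sketch below shows the equivalent idea at the database level; dialects vary, and the table and column names are hypothetical:

```sql
-- Database-level equivalent of a surrogate key: an auto-generated
-- integer that identifies the row independently of the natural key.
CREATE TABLE dim_customer (
    customer_sk   INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_code VARCHAR(20),   -- natural key, allowed to change
    customer_name VARCHAR(100)
);
```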
**34. What are the prerequisites for session partitioning?**

To perform session partitioning, one needs to configure the session to partition the source data and to install the Informatica server on a machine with multiple CPUs.

**35. Which files does the Informatica server create during a session run?**

During a session run, the files created are the error log, the bad (reject) file, the workflow log, and the session log.

**36. Briefly define a session task.**

It is a set of instructions that guides the PowerCenter server on how and when to move data from sources to targets.

**37. What does "command task" mean?**

This task allows one or more shell commands in [Unix](/unix-linux-tutorial.html), or DOS commands on Windows, to run during the workflow.

**38. What is a standalone command task?**

This task can be used anywhere in the workflow to run shell commands.

**39. What is meant by pre- and post-session shell commands?**

A command task can be invoked as the pre- or post-session shell command of a session task. It can be run as a pre-session command, a post-session success command, or a post-session failure command.
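As a sketch, a post-session success command of the kind just described; it could, for example, run the index-creation script from question 7. The connect string and script path are hypothetical placeholders:

```shell
# Post-session success command: rebuild indexes after the load completes.
sqlplus dwh_user/dwh_password@DWHDB @/opt/etl/scripts/create_indexes.sql
```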
**40. What is a predefined event?**

It is a file-watch event: it waits for a specific file to arrive at a specific location.

**41. How can you define a user-defined event?**

A user-defined event can be described as a flow of tasks in the workflow. Events can be created and then raised as the need arises.

**42. What is a workflow?**

A workflow is a set of instructions that tells the server how to execute tasks.

**43. What are the different tools in the Workflow Manager?**

The tools in the Workflow Manager are:

* Task Developer
* Worklet Designer
* Workflow Designer

**44. Apart from the Workflow Manager and pmcmd, name any other tool used for scheduling.**

A scheduling tool other than the Workflow Manager can be a third-party tool such as CONTROL-M.
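Besides third-party schedulers, workflows are often scheduled externally by wrapping pmcmd in cron; a minimal sketch, with the installation path and all names being hypothetical placeholders:

```shell
# crontab entry (one line): start the workflow every night at 02:00
0 2 * * * /opt/informatica/server/bin/pmcmd startworkflow -sv MyIntegrationService -d MyDomain -u dev_user -p dev_password -f SalesFolder wf_load_sales
```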
**45. What is OLAP (On-Line Analytical Processing)?**

A method by which multi-dimensional analysis is carried out.

**46. What are the different types of OLAP? Give an example.**

ROLAP (e.g., BO), MOLAP (e.g., Cognos), HOLAP, and DOLAP.

**47. What do you mean by "worklet"?**

When workflow tasks are grouped into a set, it is called a worklet. Workflow tasks include timer, decision, command, event-wait, mail, session, link, assignment, and control tasks.

**48. What is the use of the Target Designer?**

Target definitions are created with the help of the Target Designer.

**49. Where can we find the throughput option in Informatica?**

The throughput option can be found in the Workflow Monitor: right-click on the session, click "Get Run Properties", and look under "Source/Target Statistics" for the throughput option.

**50. What is target load order?**

Target load order is specified on the basis of the source qualifiers in a mapping. If multiple source qualifiers are linked to different targets, one can set the order in which the Informatica server loads data into the targets.