Community Server


Monday, 20 February 2012

Talend Open Studio: Populating a date dimension


Date dimensions are an essential part of a data warehouse. Usually they are only populated once. Scripts can be created on the database side (as outlined here), but if you are working on various projects involving a variety of databases, it is more efficient to create just one ETL job which can be used to populate any database.

In this tutorial we will have a look at creating such an ETL job with Talend Open Studio for Data Integration. We will create a basic date dimension which you can then extend even further. In order to follow this tutorial, the reader should be familiar with the basic functionality of Talend Open Studio.

Our date dimension will look like this one (partial screenshot):


The primary key of the dimension will be an integer representation of the date (e.g. 2012-02-20 becomes 20120220), which saves us the hassle of looking up the key when we transform the fact data.

Open Talend Open Studio for Data Integration and create a new job called populate_date_dimension. First we will define a context variable for the start date, because we will use this job in various projects and might require a different start date each time:

Click on the Context tab and then on the + button to add a new context variable. Give it the name myStartDate of type Date and define a value for it.


Next add a tRowGenerator component to the design area and double click on it to open the settings dialog. The idea is to generate a certain number of rows: the first row will hold our start date and each subsequent row will increment the date by one day.


  1. Click the + button to add a new column. Name it date and set the type to Date.
  2. Click in the Environment variables cell on the right-hand side; the parameters will then be displayed in the Function parameters tab on the bottom left-hand side.
  3. Define the number of rows that should be generated in Number of Rows for RowGenerator.
  4. In the Function parameters tab set the date parameter value to context.myStartDate. This will ensure that the context variable which we defined earlier will be used.
  5. Set the nb parameter to Numeric.sequence("s1", 1, 1) - 1. Use the expression builder for a more convenient setup. This creates a sequence which we use to add days to our start date; we subtract 1 at the end so that the first generated row keeps our start date.
  6. Set the dateType parameter value to "dd". This ensures that days will be added to our date (see the plain-Java sketch after this list).
  7. Click on the Preview tab and check if the result set looks as expected.
  8. Click Ok to close the component settings.
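To make the logic explicit, here is a plain-Java sketch of what the component effectively computes with these settings. This is not Talend-generated code; the class and variable names are purely illustrative.

import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

public class DateRowGeneratorSketch {
    public static void main(String[] args) {
        // corresponds to context.myStartDate
        Date myStartDate = new GregorianCalendar(2012, Calendar.JANUARY, 1).getTime();
        // corresponds to "Number of Rows for RowGenerator"
        int numberOfRows = 10;

        for (int i = 0; i < numberOfRows; i++) {
            // Numeric.sequence("s1", 1, 1) yields 1, 2, 3, ... so subtracting 1
            // keeps the start date in the first row; "dd" means days are added
            Calendar cal = Calendar.getInstance();
            cal.setTime(myStartDate);
            cal.add(Calendar.DAY_OF_MONTH, i);
            System.out.println(cal.getTime());
        }
    }
}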


Now add a tMap component and create a row from the tRowGenerator to the tMap component. Double click the tMap component:




  1. Click the + button on the right hand side to create a new output table.
  2. Add new columns to the output table and for each of them define a specific date format using this approach: Integer.parseInt(TalendDate.formatDate("yyyyMMdd",row1.date)) for integer values and TalendDate.formatDate("MM",row1.date) for string values. Have a look at the Java SimpleDateFormat documentation to get an understanding of all the formatting options. You will now spend some time setting up the various date formats.
  3. Java SimpleDateFormat doesn't provide a quarter format, hence we have to create our own in the form of a ceiling division (covered quotient): (Integer.parseInt(TalendDate.formatDate("M",row1.date)) + 3 - 1) / 3 (see the sketch after this list).
  4. Click Ok.
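For reference, here is a plain-Java sketch that mirrors the tMap expressions above; the variable names (dateKey, monthNumber, quarter) are illustrative and not tied to Talend.

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

public class DateDimensionExpressions {
    public static void main(String[] args) {
        Date date = new GregorianCalendar(2012, Calendar.FEBRUARY, 20).getTime();

        // integer surrogate key, e.g. 20120220
        int dateKey = Integer.parseInt(new SimpleDateFormat("yyyyMMdd").format(date));

        // month as a zero-padded string, e.g. "02"
        String monthNumber = new SimpleDateFormat("MM").format(date);

        // quarter: SimpleDateFormat has no quarter pattern, so use a ceiling division
        int month = Integer.parseInt(new SimpleDateFormat("M").format(date));
        int quarter = (month + 3 - 1) / 3; // months 1-3 -> 1, 4-6 -> 2, ...

        System.out.println(dateKey + " " + monthNumber + " Q" + quarter);
    }
}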


Add a database output component of your choice (in my case I used one for PostgreSQL) and create a row from the tMap to the database output component. Double click the database output component and provide all the necessary settings. That’s it: Now you can run the job and examine the data in your table.
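As a side note, if you prefer to create the target table up front instead of letting the output component generate it, a minimal PostgreSQL sketch could look like the following; the table and column names are illustrative and should match your tMap output.

CREATE TABLE dim_date (
    date_key    integer PRIMARY KEY,  -- integer representation of the date, e.g. 20120220
    date_value  date NOT NULL,
    year        smallint,
    quarter     smallint,
    month       smallint,
    month_name  varchar(20),
    day         smallint
);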


Saturday, 18 February 2012

Talend Open Studio: Scheduling and command line execution



In this tutorial we will take a look at how to export a Talend Open Studio ETL job to an autonomous folder and schedule the job via crontab. In order to follow this tutorial, the reader should be familiar with the basic functionality of Talend Open Studio for Data Integration.


How to export a job


Right click on your job and choose Export job.


In the export settings define:
  • the export folder and file name
  • the Job Version
  • set the Export type to Autonomous Job
  • tick Export dependencies
  • define the Context and tick Apply to children
Click on Finish and your job will be exported.


How to execute the job from the command line


Navigate to the folder where the zip file was exported to and unzip it. Then navigate to:


<jobname>_<version>/<jobname>

Within this folder you will find an executable shell and/or batch file:


Open this file in a text editor:


Note that the context is defined as a command line argument. It is currently set to the value which you specified on export, but you can change it to another value here at any time.

To execute the job on the command line simply navigate to this folder and run:
sh ./<jobname>_run.sh



How to execute a job with specific context variables

As you might have guessed, the approach is very similar to the one shown above; we just add command line arguments:

sh ./<jobname>_run.sh --context_param variable1=value1 --context_param variable2=value2



How to change the default context variables

If you ever need to change the value of any of your context variables, you can find the property file for each context in:

<jobname>_<version>/<jobname>/<projectname>/<jobname>_<version>/contexts/

In my case this resolves to a concrete path following the pattern above. Open one of the context files to see how they are structured: each one is a plain key=value properties file, with one line per context variable, so it is extremely easy to change these values.
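For illustration, a hypothetical context file with two string variables might look like this (the variable names and values are invented):

myOutputDir=/tmp/talend_out
myDbHost=localhost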


How to schedule a job

If you make use of context variables regularly, then it is best to include them directly in the *_run.sh or *_run.bat file: just open the file with your favourite text editor and add the variables after the context argument. Ideally though, especially if you are dealing with dates, you want to make the values more dynamic and derive them at run time, as in the sketch below.
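A minimal shell sketch of both approaches, assuming a context variable called myStartDate; the variable name, the date format and the GNU date syntax are assumptions.

# static value, passed on the command line (or appended after the existing
# --context argument inside <jobname>_run.sh):
sh ./<jobname>_run.sh --context_param myStartDate=2012-01-01

# dynamic value: derive yesterday's date at run time (GNU date syntax)
YESTERDAY=$(date -d "yesterday" +%Y-%m-%d)
sh ./<jobname>_run.sh --context_param myStartDate=$YESTERDAY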
On Linux use Crontab to schedule a job:

crontab -e

And then set it up similar to the one shown below:
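For example, a crontab entry like the following runs the job every night at 2 a.m.; the paths are illustrative.

0 2 * * * cd /home/user/talend/<jobname>_<version>/<jobname> && sh ./<jobname>_run.sh >> /home/user/talend/<jobname>.log 2>&1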

On Windows you can use the Windows Task Scheduler. As it has a GUI, it is quite straightforward to set up and hence will not be explained here.

Friday, 17 February 2012

Talend: Setting up database logging for a project



When executing Talend ETL jobs it is quite important to store the logging information. This is not only relevant for the last execution of an ETL job; keeping a longer logging history can be quite an advantage. This logging information can be stored in flat files or in a database. We will have a look at the latter option here. I will not go into too much detail, but provide a quick overview. I expect that you are familiar with the basic functionality of Talend Open Studio for Data Integration.
  1. Open Talend and create a new job. Drop the tLogCatcher, tStatCatcher, tMeterCatcher components on the design area:
  2. Click on tLogCatcher. In the component settings click on Edit Schema. In the schema dialog click on the save button. Give the schema a name. Do the same for tStatCatcher and tMeterCatcher. The schemas will then show up in the repository under Generic schemas:
  3. As we don’t need these components any more, deactivate all of them by right clicking on them and choosing deactivate.
  4. Create a new connection to your database of choice in the repository. This will be the database where all the logging data will be stored.
  5. Next we will create the logging tables: add three tCreateTable components to the design area and link each of them with an onSubjobOk row:
  6. In the component settings, assign for each of them the repository database connection.
  7. In the component settings, assign one of the generic repository schemas to one of the three components and assign a table name:
  8. Run the job. All the three logging tables should now exist in your database.
  9. Let’s add these tables to the repository database connection we defined earlier: right click on the connection and choose Retrieve schema. Choose the three logging tables and click OK.
  10. Now we can assign these repository schemas/table definitions to the project settings. In the main menu click on File > Edit project properties. Click on Stats & Logs and then tick On Databases. Assign the repository database connection and assign the respective repository schemas to the log tables. Finally tick Catch components statistics.

Now logging is set up for your project.


In the future you can run some simple SQL statements against these tables to retrieve information about the performance of your ETL jobs, as in the sketch below.
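As an illustration, assuming the stats table was named talend_stats_log and uses the default tStatCatcher columns (job, moment, message_type, duration), a query like the following lists the most recent job runtimes; adjust the table and column names to your own schema.

SELECT job, moment, duration
FROM talend_stats_log
WHERE message_type = 'end'
ORDER BY moment DESC
LIMIT 20;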

Sunday, 12 February 2012

PostgreSQL: Auto generating a sample dataset

Sometimes you just want to create simple sample datasets for tables quickly. If you are lucky your database provides some native SQL functions for this purpose.

PostgreSQL has quite an interesting recursive WITH statement which allows you to create a loop: you can, for example, define a value that is incremented with each iteration until a certain condition is met. This comes in quite handy when creating sample data for a date dimension:

Let's keep it very simple:


WITH RECURSIVE date_generator(date) AS (
   VALUES (DATE '2012-01-01')
 UNION ALL
   SELECT date+1 FROM date_generator WHERE date < DATE '2012-01-15'
)
SELECT
date
, EXTRACT(DAY FROM date) AS day
, EXTRACT(MONTH FROM date) AS month
, EXTRACT(QUARTER FROM date) AS quarter
, EXTRACT(YEAR FROM date) AS year
FROM
date_generator
;


In the WITH statement we provide a start date (in this case 2012-01-01) and increase it by 1 until a specific end date (in this case '2012-01-15') is reached. In the main query we make use of the auto generated dates by extracting various date periods. The output looks like this:

Now you can easily change the above query to insert the auto-generated data into a table, as in the sketch below. This is a very elegant solution as everything can be set up using standard SQL. Have a look at the official PostgreSQL documentation for more information.
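A minimal sketch of the INSERT variant, assuming a target table called dim_date with matching columns (the table and column names are illustrative):

INSERT INTO dim_date (date, day, month, quarter, year)
WITH RECURSIVE date_generator(date) AS (
   VALUES (DATE '2012-01-01')
 UNION ALL
   SELECT date+1 FROM date_generator WHERE date < DATE '2012-01-15'
)
SELECT
  date
, EXTRACT(DAY FROM date) AS day
, EXTRACT(MONTH FROM date) AS month
, EXTRACT(QUARTER FROM date) AS quarter
, EXTRACT(YEAR FROM date) AS year
FROM date_generator;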


Monday, 30 January 2012

Pentaho Data Integration (Kettle) Big Data features go Open Source

I just read some pretty exciting news: all the Big Data features in Pentaho Data Integration (Kettle) will be made available in the open source version. This means you now have free access to all the Hadoop, Cassandra, HBase and MongoDB steps and job entries, a move which will certainly increase the popularity of this ETL tool even further. The Kettle GUI allows you to easily create transformations and jobs which import, transform and export your data.
You can find a very interesting tutorial on how to design a MapReduce job with Kettle here.

Saturday, 21 January 2012

Comparison of resource sharing features in open source reporting tools


In large scale reporting projects sharing resources is key for efficiency. This article tries to compare the three most popular open source reporting tools to understand to what extent their feature sets support the various resource files that are likely to be shared.

As an example, consider maintaining style definitions in a centralized global file. Instead of having to change a particular style in every report, this can simply be done in one global file. The clear advantages are (1) time savings and (2) consistency: every style is defined the same way for all reports, and it is not as error-prone as changing the style in every report.

Apart from styles there are several other report properties which can be maintained in global external files to achieve a similar effect.




Sunday, 8 January 2012

Book review: Agile Analytics: A Value-Driven Approach to Business Intelligence and Data Warehousing: Delivering the Promise of Business Intelligence

A high percentage of classic waterfall model business intelligence projects fail. One of the main reasons for this is that the waterfall model is a sequential model: simply speaking, you plan first, then develop, then test, and then the whole project is supposed to be finished. BI projects normally span several months. During this time, requirements change, the priority of the requirements might change, the client's understanding of what BI actually is changes, and so on. A classic waterfall model cannot accommodate these changing requirements easily over time, as planning was done in the first step only.
With "Agile Analytics: A Value-Driven Approach to Business Intelligence and Data Warehousing: Delivering the Promise of Business Intelligence" Ken Collier introduces agile methodology to BI projects.  Readers new to the agile methodology will find a detailed introduction (i.e. What are user stories? How do I conduct an agile project?), learn about the iterative cycles (sprints) which allow feedback driven development, various other approaches like test driven development, continuous integration and much more. Ken Collier also introduces the Message Driven Warehouse which among other benefits allows to easily and quickly implement new requirements.
In a nutshell, it is one of the best books on BI which I read last year, so I can only highly recommend it.
