Scot-Cloud 2014

Millersoft had the pleasure of attending the Scot-Cloud 2014 event at the Craiglockhart Campus of Napier University.

The all-day agenda contained four sessions featuring industry leaders and notable guest speakers. Cloud computing workshops, breakout sessions, case studies, Q&As, networking, free coffee, and a panel discussion all made it a worthwhile and enjoyable day out of the office.

We look forward to attending the 2015 event.

Pentaho Partner Summit 2013

Millersoft was one of over 40 Pentaho Partners from around the world attending the Pentaho Partner Summit (PPSUMM13) at the Tivoli Hotel in Sintra, Portugal.

Pentaho Community Meeting 2013

The sixth edition of the Pentaho Community Meeting (PCM13) was held in the beautiful surroundings of the Pena Palace gardens just outside Lisbon, Portugal. We enjoyed our time networking and taking in the workshops and presentations with the rock stars of Pentaho and key technology partners.

Thanks to the folks at Webdetails and Pentaho for the warm welcome and for organising an entertaining event.

Business Intelligence New Zealand

Millersoft AsiaPacific opens a new office providing Business Intelligence in New Zealand

As part of our overseas expansion, Millersoft AsiaPacific is now located in Auckland providing Pentaho open source Business Intelligence consulting and services to clients throughout the region.

Auckland is New Zealand’s largest city, and its main hub airport has regular connections to other New Zealand cities as well as Sydney, Melbourne and Brisbane in Australia, all only a few hours away.

Steve Graham is our contact in Auckland, and he brings a wealth of experience in developing software and delivering solutions for clients. He has previously worked on global projects for GE-Reinsurance, Cisco Systems and Vodafone, and has recently been working with the other Millersoft team members in Edinburgh on the lewis group project.

To contact Steve, email steve@millersoftltd.com.

The Lewis Group banks on Pentaho to drive performance gains

Leading UK debt collection agency opts for commercial open source-based reporting and data integration to create resource capacity and improve performance management as the business expands

LONDON – July 19, 2012 – Delivering the future of business analytics, Pentaho Corporation today announced that leading UK debt collection agency, the lewis group, has chosen to implement the integrated Pentaho Business Analytics and Data Integration platform as part of a larger technology refresh designed to support its growth plans.  In addition to its exceptional data integration and visualisation tools, Pentaho was chosen for its commercial open source model, which enables the company to manage IT spend and establish a modern, scalable architecture as it rolls out new performance management dashboards. The lewis group engaged Pentaho partner Millersoft to support the implementation.

The lewis group, which handles debt collection for clients in sectors as diverse as finance, retail, insurance, education and central and local government, is currently implementing the data integration, reporting and dashboard capabilities of Pentaho Business Analytics to deliver performance management dashboards to help improve operational efficiency:

  • Key client ‘RAG’ reports - RAG (red / amber / green) reports present information about how the lewis group is performing for its key clients against budget expectations. The reports present KPIs (key performance indicators) that show, for example, how much, how fast and how effectively it is able to collect debt on specific clients’ behalf. Each KPI is presented in a dashboard, with a red, amber or green light, providing an early warning system so that problems can be fixed before they escalate. Currently, RAG reports are live and rolled out to 15 senior managers.
  • Departmental reports - these reports are to be based on aggregated client data and will run at a departmental level (e.g., by office or team). They will measure general business performance.
  • Individual performance scorecards - these will measure individual performance for employees like debt collectors, who have specific targets for collection and quality each month and are paid bonuses against those targets.

Millersoft, Pentaho’s top-performing Gold partner in the UK, is delivering and supporting the reporting dashboards, which will be rolled out through the second half of 2012. Millersoft is also handling other aspects of the implementation, including the construction of a new corporate data warehouse that uses an innovative ‘data vault’ approach. This new data warehouse is designed to make it easier for the lewis group to manage operational data from multiple sources as business requirements change and grow. Millersoft is using Pentaho Data Integration to move data from the lewis group’s operational systems into the data vault.

Quotes and Multimedia

Howard Bethell, Change Director, the lewis group commented, “Our customer base is expanding rapidly so we had the choice of either throwing more people at our current reporting system or modernising our systems to create extra capacity. Pentaho’s integrated business analytics and data integration software and its commercial open source model made it technically and economically viable to choose a modern, more sustainable approach.”

Davy Nys, VP EMEA & APAC, Pentaho said, “More and more UK financial services companies like the lewis group are discovering the financial and technical advantages of our modern, end-to-end business analytics platform. We are very pleased to be helping to support the company’s expansion by enabling it to deliver fast, flexible and actionable performance management reports.”

Calum Miller, managing director, Millersoft added, “Pentaho Data Integration was the perfect tool for moving data from the lewis group’s operational systems into the new corporate data warehouse we built using the ‘data vault’ approach, which we felt provided the best flexibility and resilience to handle the company’s future requirements.”

Learn about the advantages of Pentaho Business Analytics

About the lewis group
The lewis group is one of the UK’s biggest and best-performing collections businesses, achieving excellent recovery and market-leading compliance. The 40-year-old group provides collection, investigation, litigation and tracing services to some of the UK’s leading private and public sector organisations.

About Millersoft
Millersoft, based in Edinburgh, has been working with open source business intelligence software for five years and delivers solutions to companies across Europe including HouseTrip, Air Menzies International, Regenersis and the lewis group. Millersoft can deliver complete business intelligence solutions or train and mentor existing staff. Millersoft specialises in Pentaho and is its number one Gold Partner in the UK. For more information, visit www.millersoft.ltd.uk.

About Pentaho Corporation
Pentaho is delivering the future of business analytics. Pentaho’s open source heritage drives our continued innovation in a modern, integrated, embeddable platform built for the future of analytics, including diverse and big data requirements. Powerful business analytics are made easy with Pentaho’s cost-effective suite for data access, visualisation, integration, analysis and mining. For a free evaluation, download Pentaho Business Analytics at pentaho.com/get-started.

Download the press release

Read the Use Case Overview

Magic Pivot Tables

This short video explains how business users can create powerful Pivot Tables using Pentaho, free of IT involvement.

Watch movie

Millersoft and Pentaho at Internet Retailing Expo

Millersoft Ltd and Pentaho will exhibit at the Internet Retailing Expo on 23 March 2011 at the Birmingham NEC. This exhibition includes the biggest players in internet retailing, and we look forward to sharing our experiences of using Pentaho with the delegates. Shortly before the event we will be conducting a retail-focused webinar, where we will show Business Intelligence users how to combine customer and order data to target different market segments.

Pentaho and MS Dynamics Navision

This video shows the outcome of a proof of concept, with invoice line items from within Dynamics Navision visible in the Pentaho Analyser:

Watch movie

Millersoft interviewed in Sunday Telegraph BI section

Millersoft Director, Calum Miller, was interviewed on Open Source Business Intelligence in the Media Planet supplement of the Sunday Telegraph.

Read article

Butler BI

Millersoft and Pentaho are gold sponsors of the 2010 Butler BI conference.

Millersoft has been working with Open Source Business Intelligence software for five years and delivers solutions to companies across Europe including Skype, Air Menzies International and Regenersis.

Millersoft can deliver complete Business Intelligence solutions or train and mentor existing staff. We have particular experience working with ERP systems like Lawson/Movex, Ofbiz and Oracle EBS. However, Millersoft also helps companies build BI systems from scratch using any source of data.

Millersoft supports the full Pentaho stack, with long experience delivering integrated dashboards, reports and ad-hoc OLAP analysis. We also have niche expertise integrating Excel Pivot Tables with Pentaho and running the software within the Amazon cloud.

Millersoft also has extensive database experience and can advise clients on the best choice of database to meet the number crunching needs of high volume analytics. Currently, our company is exploring Hadoop integration within the Pentaho suite.

Within the space of six months, Millersoft has become a key Pentaho partner, selling and supporting Open Source solutions across different industry sectors including logistics, telecoms and retail.

Millersoft to attend Pentaho Partner Conference in Lisbon

Millersoft Ltd will be attending the Pentaho Partner EMEA Conference 2010 on 23 September. Millersoft Director, Calum Miller, will travel to Lisbon to meet the following people from Pentaho Corporation:

Matt Casters, Founder, Project Lead, Pentaho Data Integration (ETL)
Richard Baldwin, Global Director of Channels
Julian Hyde, Mondrian Founder, Project Lead, Pentaho Analysis

Pentaho Data Integration 4.0

Pentaho Data Integration 4.0 breaks the Business Intelligence mould by successfully combining data transformation with data interrogation in one seamless product.

Use the graphical tool to pull data from operational data stores (databases, flat files, spreadsheets, web services), transform and link records with pre-defined components, then view the results in the auto-generated Pivot Table. If you like the results, just publish them for all to see on the Pentaho BI Server. Simple!

Watch the Pentaho Data Integration Movie

Excel with Mondrian, Just Crunch IT

An introduction to using Excel with Mondrian, covering the key features and benefits. This demonstration uses the Greenplum database to hold the cubes, but virtually any database can be used. Millersoft Ltd is happy to help companies with cube design and ETL, as well as Mondrian optimisation and configuration.

Open Source Resources

Make sure you follow our Twitter feed, as we will be posting regular open source reviews and demonstrations in our resource centre.

Open source OLAP

At the heart of both Pentaho and Jaspersoft is a powerful open source OLAP engine called Mondrian, which enables OLAP capabilities on top of any database. Using this tool you can create Pivot Tables to slice and dice terabytes of data without the huge costs associated with high-end business intelligence applications like Cognos or MicroStrategy.

Millersoft Ltd has long experience with Mondrian, running it against both small and huge databases. If you need help realising its capabilities, give us a call.

Greenplum under VMWare

“This new, free version of Greenplum Database gives data analysts access to Greenplum’s high-performance database for large-scale analytical projects outside the enterprise data warehouse (EDW). The Single-Node Edition is a state-of-the-art parallel analytic database, and can participate as a distributed node of Greenplum’s Enterprise Data Cloud — allowing centralized management, data discovery and data sharing across databases.”

The new standalone version should be a boon for anyone using Postgres for OLAP, as it will allow the database to utilise every CPU core. What follows are brief notes to help users create a VMWare build running with Jaspersoft (email for further help or assistance and I’ll update this post):

You can download a copy from:

http://www.greenplum.com/community/downloads/

I used the Red Hat Enterprise Linux 5.x / CentOS 5.x (x86, 64-bit) version, fetched with wget.

You can download a virgin copy of VMWare CentOS 5 here using a torrent client:

http://torrents.thoughtpolice.co.uk/centos-5.3-x86_64-server.zip.torrent

Remember to open ports 5432 and 8080 during the install
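
If the appliance’s firewall is on, opening them looks something like this (a minimal sketch for the iptables setup CentOS 5 ships with; adjust to your own firewall configuration):

# Allow Greenplum (5432) and the JasperServer Tomcat (8080) through the firewall
iptables -I INPUT -p tcp --dport 5432 -j ACCEPT
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
# Persist the rules across reboots
service iptables save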

You need to increase the CPU and memory settings for this VMWare appliance, otherwise you will run out of resources

Create a Linux user called gpadmin and set this user up so that they can log in on the same box without a password, otherwise the installation process will ask you to log in 100 times:

http://www.fnode.com/2009/09/how-to-enable-ssh-key-authentication-ssh-login-without-a-password/
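
In practice that boils down to something like this (a minimal sketch assuming the stock OpenSSH tools; run as gpadmin):

# Generate a key pair with no passphrase and authorise it for local logins
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify: this should print the hostname without prompting for a password
ssh gpadmin@localhost hostname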

Add your hostname to /etc/sysconfig/network

Mine is greenplumx

Add your IP address to /etc/hosts

Mine is 172.16.245.133  greenplumx

Restart networking
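
Pulling those three steps together (a sketch using my hostname and IP; substitute your own values, and edit /etc/sysconfig/network by hand if a HOSTNAME entry already exists):

# Record the hostname (CentOS 5 keeps it in /etc/sysconfig/network)
echo "HOSTNAME=greenplumx" >> /etc/sysconfig/network
hostname greenplumx
# Map the VM's IP address to the hostname
echo "172.16.245.133  greenplumx" >> /etc/hosts
# Restart networking so the change takes effect
service network restart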

Follow the installation instructions for Greenplum; a copy of my gp_init_config file is listed below
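
With the config file in place, initialisation boils down to something like this (a sketch assuming the default /usr/local/greenplum-db install location):

# Load the Greenplum environment
source /usr/local/greenplum-db/greenplum_path.sh
# List this host as the only segment host (referenced by MACHINE_LIST_FILE below)
echo "greenplumx" > /home/gpadmin/single_host_file
# Exchange ssh keys with every host in the list (here, just ourselves)
gpssh-exkeys -f /home/gpadmin/single_host_file
# Initialise the master and segment instances using the config below
gpinitsystem -c gp_init_config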

You can install the standard open source version of Jaspersoft (jasperserver-3.5.0-linux-installer.bin) direct from SourceForge. You will need to run this against the default MySQL database for configuration, as Greenplum has DML restrictions. Once installed, you can add in the Greenplum data source and start your analysis. Use this Ant task to load the FoodMart demo data into Greenplum:

<target name="CopyFoodmartFromFile"
        description="Loads the FoodMart demo data into Greenplum.">
  <java classpathref="project.classpath" classname="mondrian.test.loader.MondrianFoodMartLoader" fork="no">
    <arg value="-verbose" />
    <arg value="-indexes" />
    <arg value="-jdbcDrivers=org.postgresql.Driver" />
    <arg value="-inputFile=FoodMartCreateData.sql" />
    <arg value="-outputJdbcURL=jdbc:postgresql://172.16.245.133/foodmart" />
    <arg value="-outputJdbcUser=gpadmin" />
    <arg value="-outputJdbcPassword=YOURPASSWORD" />
  </java>
</target>
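
Assuming the target lives in a build.xml whose project.classpath points at the Mondrian and PostgreSQL JDBC jars (both names above are from my setup), you would create the target database and run the loader roughly like this:

# Create the empty foodmart database in Greenplum
createdb -h 172.16.245.133 -U gpadmin foodmart
# Run the loader target
ant CopyFoodmartFromFile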

Copy of my gp_init_config file

# FILE NAME: gp_init_config

# A configuration file is needed by the gpinitsystem
# script to tell it how to configure the master and segment
# instances in your Greenplum Database system. This file can be named
# whatever you like, and is referenced when you run gpinitsystem.

################################################
# REQUIRED PARAMETERS
################################################

# A name for the array you are configuring. You can use any name you
# like. Enclose the name in quotes if the name contains spaces.

ARRAY_NAME="Greenplum Database"

# This specifies the file that contains the list of segment host names
# that comprise the Greenplum system; the master host is assumed to
# be the host from which you are running the script. If the host list
# file does not reside in the same directory where the gpinitsystem
# script is executed, specify the absolute path to the file.

MACHINE_LIST_FILE=/home/gpadmin/single_host_file

# This specifies a prefix that will be used to name the data directories
# of the master and segment instances. The naming convention for data
# directories in a Greenplum Database system is SEG_PREFIX<number>
# where <number> starts with 0 for segment instances and the master
# is always -1. So for example, if you choose the prefix gp, your
# master instance data directory would be named gp-1, and the segment
# instances would be named gp0, gp1, gp2, gp3, and so on.

SEG_PREFIX=gp

# Base port number on which primary segment instances will be
# started on a segment host. If a host has multiple primary segment
# instances, the base port number will be incremented by one for each
# additional segment instance started on that host.

PORT_BASE=50000

# This specifies the data storage location(s) where the script will
# create the primary segment data directories. The script creates a
# unique data directory for each segment instance. If you want multiple
# segment instances per host, list a data storage area for each primary
# segment you want created. The recommended number is one primary segment
# per CPU. It is OK to list the same data storage area multiple times
# if you want your data directories created in the same location. The
# number of data directory locations specified will determine the number
# of primary segment instances created per host.
# You must make sure that the user who runs gpinitsystem (for example,
# the gpadmin user) has permissions to write to these directories. You
# may want to create these directories on the segment hosts before running
# gpinitsystem and chown them to the appropriate user.

declare -a DATA_DIRECTORY=(/dbfast1 )
# declare -a DATA_DIRECTORY=(/gp_primary /gp_primary)

# The host name of the Greenplum Database master instance.

MASTER_HOSTNAME=greenplumx

# The location where the data directory will be created on the
# Greenplum master host.
# You must make sure that the user who runs gpinitsystem
# has permissions to write to this directory. You may want to
# create this directory on the master host before running
# gpinitsystem and chown it to the appropriate user.

MASTER_DIRECTORY=/master

# The port number for the master instance. This is the port number
# that users and client connections will use when accessing the
# Greenplum Database system.

MASTER_PORT=5432

# The shell the gpinitsystem script uses to execute
# commands on remote hosts. Allowed value is ssh. You must set up
# your trusted host environment before running the gpinitsystem
# script. You can use gpssh-exkeys to do this.

TRUSTED_SHELL=ssh

# Maximum distance between automatic write ahead log (WAL)
# checkpoints, in log file segments (each segment is normally 16
# megabytes). This will set the checkpoint_segments parameter
# in the postgresql.conf file for each segment instance in the
# Greenplum Database system.

CHECK_POINT_SEGMENTS=8

# The character set encoding to use. Greenplum supports the
# same character sets as PostgreSQL. See ‘Character Set Support’
# in the PostgreSQL documentation for allowed character sets.
# Should correspond to the OS locale specified with the
# gpinitsystem -n option.

ENCODING=UNICODE

################################################
# OPTIONAL PARAMETERS
################################################

# Optional. Uncomment to create a database of this name after the
# system is initialized. You can always create a database later using
# the CREATE DATABASE command or the createdb script.

DATABASE_NAME=warehouse

################################################
# OPTIONAL PARAMETERS FOR SEGMENT MIRRORING
################################################

# Uncomment these parameters to set up mirroring at initialization.
# If you are using multiple network interfaces per segment host,
# do NOT set up mirrors at initialization time. Do so afterwards
# using gpaddmirrors.

# The base port number on which mirror segment instances will be
# started on a segment host. If a host has multiple mirror segment
# instances, the base port number will be incremented by one for
# each additional mirror segment instance started on that host.
# Be sure to use a different number than the primary PORT_BASE.

#MIRROR_PORT_BASE=60000

# The data directory where mirror segment instances will be
# created on a host. There must be the same number of data directories
# declared for mirror segment instances as for primary segment instances
# (see the DATA_DIRECTORY parameter for more information).

# You must make sure that the user who runs gpinitsystem
# has permissions to write to these directories. You may want to
# create these directories on the segment hosts before running
# gpinitsystem and chown them to the appropriate user.

#declare -a MIRROR_DATA_DIRECTORY=(/dbfast3 /dbfast4)
#declare -a MIRROR_DATA_DIRECTORY=(/gp_mirror /gp_mirror)