[Recap] NCOAUG 2018 Winter Training Day

Download ennVee’s presentation and infographic from the 2018 NCOAUG Training Day conference.


Another year, another solid NCOAUG Training Day. Thanks to everyone who came out to Drury Lane and connected with us.


ennVee President Veera Venugopal and VP of Global Sales Joe Bong hosted an afternoon session covering the voice of the customer, as well as ways to leverage automation during the EBS R12.2/ERP upgrade process. The session, entitled “Demystifying Customization During Your EBS R12.2.x Upgrade,” can be downloaded here.


Presentation: Demystifying Customization During Your EBS R12.2.x Upgrade

93% of companies using Oracle EBS leverage some form of customization, ranging from simple changes to seeded Oracle functionality to complex integrations with other systems. Upgrading to EBS R12.2 requires manual remediation of all custom objects, and managing and executing this remediation process is the key to an efficient and successful upgrade.

There is no one-size-fits-all solution for a successful upgrade, but there are ways in which companies can leverage automated tools and practices to significantly reduce manual effort, downtime, cutover, and post-go-live errors when upgrading to R12.2.x.

If you’re wondering how your specific level of customization will impact your upgrade, join us for this session to hear the “voice of the customer”: a summary of the top-rated industry-wide challenges, objectives, and overall timelines of those that have upgraded, or plan to upgrade, to R12.2. Learn ennVee’s recipe for a more effective EBS R12.2 upgrade, and how IT leaders have leveraged automation to shrink development time and consulting spend by 80%.

Regardless of where you are in the upgrade process, this session will help you set realistic expectations and build a better project plan, one that bakes automation into each phase to reduce post-go-live errors, testing cycles, and downtime.

Bonus Infographic Download

ennVee partnered with an independent research firm in 2017 to collect survey responses from more than 500 IT leaders on the challenges, business objectives, project timelines, and levels of customization involved when upgrading to Oracle E-Business Suite R12.2.

Click here to download the infographic.

Click here to download the presentation.

Click here to learn more about R12.2 upgrade automation.

Contact us for more information.

Connect with us at these upcoming events

March 20
Oracle Code 2018 Chicago (Chicago, IL)

April 10-12
Oracle Modern Customer Experience (Chicago, IL)

May 16
2018 Great Lakes Oracle Conference (GLOC) (Cleveland, OH)

Migrating from Oracle Discoverer to Microsoft SSRS

A private women’s liberal arts college migrates from Oracle Discoverer to MS SSRS in 8 weeks.

Retiring legacy reporting tools

The client is an internationally recognized, private women’s liberal arts college located in Massachusetts. An IT-savvy institution, they used Oracle Discoverer for all MIS requirements. After Oracle ended support for Discoverer in mid-2017, they sought an optimal path to move their analytical reports to a more intuitive platform, ultimately selecting Microsoft SQL Server Reporting Services (MS SSRS) to replace Discoverer. Continue reading “Migrating from Oracle Discoverer to Microsoft SSRS”

An Automotive Parts Manufacturer’s Digital Vision Quest

Crafting an interactive digital portal for the partner, customer, and automotive aftermarket ecosystem at large.

Going Digital

This US-based global manufacturer leads the automotive aftermarket in the research, development, manufacturing, testing, and supply of brake system components. Over time, they amassed a diverse portfolio of parts and products through acquisitions of other leading parts suppliers and, most importantly, continuous innovation in the automotive aftermarket space. Continue reading “An Automotive Parts Manufacturer’s Digital Vision Quest”

NCOAUG 2018 Training Day

Connect with ennVee at the annual NCOAUG Training Day on March 9th at Drury Lane in Oakbrook Terrace, IL.

The annual NCOAUG winter Training Day will be held on Friday, March 9th at the Drury Lane Theatre in Oakbrook Terrace, Illinois. This is a great opportunity for Oracle users to network and catch up on the latest Oracle cloud, applications, and IT industry trends through a mix of educational sessions and discussion panels.

The North Central Oracle Applications User Group (NCOAUG) is part of the national Oracle Applications User Group (OAUG) and has been providing outstanding Oracle education since 1994 for the seven-state region of Illinois, Iowa, Wisconsin, Indiana, Michigan, Minnesota, and Ohio.

This year, ennVee joined as a conference sponsor and submitted an abstract for a chance to present under the Oracle E-Business Suite learning track. Veera Venugopal (President) and Joe Bong (Vice President, Global Sales & Marketing) anticipate speaking about automation best practices for Oracle E-Business Suite R12.2 upgrades, as well as a summary of the results from our 2017 IT leadership survey.

We look forward to meeting new and familiar faces in the Midwest. Make sure to stop by our booth to meet the ennVee team.


Register to attend and learn more by visiting the NCOAUG website.

Dealing With Unsanitized Data

The amount of data in the world is growing rapidly every day — but this data needs to be analyzed correctly. Check out what you need to consider when dealing with unsanitized data.


By: Arun Yaligar


Big data is not just a buzzword. It is a very important concept with a considerable impact on business in general. Big data is a vast collection of structured and unstructured data gathered from internal and external sources which, after processing and analysis, can be turned into valuable insights. Conventional database techniques can’t be applied to big data processing, so in today’s information- and technology-dependent world there is a burning need for new, effective techniques to handle data and make the most of it. Real-time data collection gives us the opportunity to learn customer preferences as they form. Big data enables the segmentation of customers, a customized approach, and the ability to target an audience more precisely and in a better-prepared way.

Challenges Faced When Using Unsanitized Data

First of all, all that data needs to be analyzed correctly. The following points are important to consider when dealing with unsanitized data.

  • Identifying the correct filters is crucial (see the filtering sketch after this list). The amount of information is overflowing, but not all of it is relevant or useful, and not all of the available information needs to or should be ingested and processed. Setting the right filters so that you don’t miss the important data will determine the ultimate success of the analysis.
  • Extraordinarily large amounts of data call for big data environments, because traditional data computing and processing won’t do here. For efficient analysis, big data processing should be performed automatically, and big data startups should develop an appropriate approach to storing and structuring information in the most efficient way.
  • Automatically producing metadata can enhance analysis, but keep in mind that computer systems may have defects and cause false results.
  • There is an urgent need for qualified people who can handle, analyze, and structure data. Innovation is progressing very quickly and information streams in from multiple sources, so developing a smart approach to prioritizing and processing big data is vital, yet it is quite difficult to find people who possess the right skills.
  • Big data startups should consider privacy and security issues, as well.
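As a concrete illustration of the first point above, here is a minimal Python sketch of pre-filtering records before any heavy processing. The field names and rules are hypothetical; the point is that irrelevant or incomplete records are dropped up front.

```python
# Hypothetical records: only complete entries from trusted sources should
# survive the filter and move on to expensive downstream processing.
records = [
    {"source": "web", "country": "US", "value": 42},
    {"source": "bot", "country": "US", "value": 17},   # automated traffic
    {"source": "web", "country": None, "value": 99},   # missing field
]

def is_relevant(record):
    """Keep only complete records from sources we trust."""
    return record["source"] != "bot" and record["country"] is not None

filtered = [r for r in records if is_relevant(r)]
print(filtered)  # only the first record survives
```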

Best Practices to Avoid Dealing With Unsanitized Data

Let’s look at potential solutions for challenges involving the three Vs — data volume, variety, and velocity — as well as privacy, security, and quality.

Potential Solutions for Data Volume Challenges

Let’s talk about Hadoop, visualization, robust hardware, grid computing, and Spark.


Hadoop

Tools like Hadoop are great for managing massive volumes of structured, semi-structured, and unstructured data. Because it is a relatively new technology, many professionals are unfamiliar with Hadoop, and using it requires a lot of learning, which can divert attention from solving the main problem toward learning Hadoop.


Visualization

Visualization is another way to perform analyses and generate reports, but sometimes the granularity of the data makes it harder to access the level of detail needed.

Robust Hardware

Robust hardware is another good way to handle volume problems. It enables increased memory and powerful parallel processing to chew through high volumes of data swiftly.

Grid Computing

Grid computing consists of a number of servers interconnected by a high-speed network, with each server playing one or many roles.
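As a loose, single-machine sketch of the grid idea, the following Python snippet fans independent chunks of work out to a pool of workers and gathers the results. A real grid spreads these roles across networked servers; here the "servers" are just local processes, an assumption made to keep the example self-contained.

```python
# Each worker process plays one role: summarize its share of the data.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]  # split the work four ways
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(process_chunk, chunks))
    print(sum(partials))  # combine the partial results
```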


Spark

Platforms like Spark pair a distributed processing model with in-memory computing to create huge performance gains for high-volume and diversified data.

All of these approaches allow firms and organizations to explore huge data volumes and extract business insights. There are two possible ways to deal with the volume problem: we can either shrink the data or invest in good infrastructure, and based on our budget and requirements we can select the most appropriate technology or method.
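To make the Spark option concrete, here is a minimal PySpark sketch, assuming the pyspark package and a local Spark runtime are available. The cache() call is the in-memory computing at work: subsequent actions reuse the cached data instead of recomputing it.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("volume-sketch").getOrCreate()

# A tiny stand-in for a high-volume dataset.
df = spark.createDataFrame(
    [("orders", 120.0), ("orders", 80.0), ("returns", 15.0)],
    ["event_type", "amount"],
)
df.cache()  # keep the dataset in memory across actions

df.groupBy("event_type").agg(F.sum("amount").alias("total")).show()
spark.stop()
```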

Potential Solutions for Data Variety Challenges

Let’s look at OLAP tools, Apache Hadoop, and SAP HANA.

OLAP (Online Analytical Processing) Tools

Data processing can be done using OLAP tools to establish connections between information and assemble data logically so that it can be accessed easily. With OLAP tools, specialists can quickly process high-volume data. One drawback is that OLAP tools process all the data provided to them, regardless of the data’s relevancy.
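A full OLAP server is beyond a short example, but the following pandas sketch shows the same idea in miniature: data assembled logically along dimensions so it can be sliced and rolled up. The column names are hypothetical.

```python
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [100, 150, 90, 130],
})

# A pivot is the classic OLAP "cube" view: one dimension per axis,
# an aggregate in every cell.
cube = sales.pivot_table(index="region", columns="quarter",
                         values="revenue", aggfunc="sum")
print(cube)
```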

Apache Hadoop

Hadoop is open-source software whose main purpose is to manage huge amounts of data in a very short amount of time and with great ease. Hadoop divides data across the multiple systems of an infrastructure for processing, and it creates a map of the content so the data can be easily accessed and found.
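Hadoop itself is written in Java, but Hadoop Streaming lets any script serve as the map or reduce step by reading stdin and writing stdout. Below is a classic word-count mapper in Python as a hedged sketch; the hadoop command in the comment is illustrative, and a companion reducer would sum the counts per key.

```python
# Illustrative invocation (paths and jar location will vary):
#   hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py \
#     -input /data/in -output /data/out
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            # Emit "key<TAB>value". Hadoop shuffles identical keys to the
            # same reducer: the "map of the content" described above.
            print(f"{word}\t1")

if __name__ == "__main__":
    mapper()
```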


SAP HANA

SAP HANA is an in-memory data platform that is deployable as an on-premise appliance or in the cloud. It is a revolutionary platform that’s best suited for performing real-time analytics as well as developing and deploying real-time applications. New database and indexing architectures make sense of disparate data sources swiftly.
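As a hedged connection sketch, SAP ships a Python driver, hdbcli; the host, port, and credentials below are placeholders, and the sales table is hypothetical.

```python
from hdbcli import dbapi  # SAP's Python driver, assumed installed

conn = dbapi.connect(address="hana-host", port=30015,
                     user="ANALYST", password="secret")
cur = conn.cursor()
# HANA runs analytics in memory, so aggregations like this return quickly
# even over large tables.
cur.execute("SELECT region, SUM(revenue) FROM sales GROUP BY region")
for row in cur.fetchall():
    print(row)
conn.close()
```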

Potential Solutions for Velocity Challenges

Let’s talk about flash memory, transactional databases, and cloud hybrid models.

Flash Memory

Flash memory is needed for caching data, especially in dynamic solutions that can classify that data as either hot (highly accessed) or cold (rarely accessed).
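The hot/cold split can be sketched in a few lines of Python: track access counts and report which keys have earned a place in the fast tier. The threshold here is arbitrary and purely illustrative.

```python
from collections import Counter

HOT_THRESHOLD = 3          # arbitrary cutoff for this sketch
access_counts = Counter()

def tier_for(key):
    """Classify a key as hot or cold based on how often it is read."""
    access_counts[key] += 1
    if access_counts[key] >= HOT_THRESHOLD:
        return "hot (cache in flash/memory)"
    return "cold (leave on disk)"

for key in ["user:1", "user:1", "user:2", "user:1", "user:1"]:
    print(key, "->", tier_for(key))
```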

Transactional Databases

According to Tech-FAQ, “A transactional database is a database management system that has the capability to roll back or undo a database transaction or operation if it is not completed appropriately.” Equipped with real-time analytics, transactional databases can provide a faster response for decision-making.
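The roll-back capability in that definition is easy to demonstrate with sqlite3 from Python's standard library: a failed statement undoes the whole transaction, leaving no half-finished state behind.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

try:
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
    conn.execute("INSERT INTO accounts VALUES (1, 0)")  # violates the PRIMARY KEY
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # the UPDATE is undone along with the failed INSERT

print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone())
# -> (100,): the incomplete operation left no trace
```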

Cloud Hybrid Models

Expanding the private cloud using a hybrid model reduces the additional computational power needed for data analysis and helps in selecting the hardware, software, and business process changes required to handle high-pace data needs.

Potential Solutions for Quality Challenges

Let’s talk about data visualization and big data algorithms.

Data Visualization

If data quality is the concern, visualization is effective because it lets us see where outliers and irrelevant data lie. For quality, firms should have a data control, surveillance, or information management process in place to ensure that the data is clean. Plotting data points on a graph for analysis becomes difficult when dealing with an extremely large volume of data or data with a wide variety of information. One way to resolve this is to cluster data into a higher-level view where smaller groups of data become visible. By grouping the data together, or “binning,” you can more effectively visualize it.
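Here is a small pandas sketch of the binning idea: a hundred thousand synthetic points collapse into ten buckets, and the shape of the data, including outlier-heavy buckets, becomes visible at a glance.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a large, messy measurement column.
values = pd.Series(np.random.default_rng(0).normal(50, 15, 100_000))

bins = pd.cut(values, bins=10)         # "binning": 100k points -> 10 buckets
summary = values.groupby(bins).size()  # how many points fall in each bucket
print(summary)                         # a text histogram of the distribution
```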

Big Data Algorithms

Data quality and relevance are not new concerns; they have been concerns ever since we started dealing with data and storing every piece of data a firm produces. Dirty data is too expensive to keep, costing companies hundreds of billions of dollars every year. Big data algorithms, well suited to maintaining, managing, and cleaning data, can be an easy way to clean it. There are many existing algorithms and models, and we can also build our own algorithms to act on the data.
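As a hedged example, the pattern behind much of that cleaning (dedupe, normalize, fill gaps) fits in a few lines of pandas; the columns and rules below are hypothetical.

```python
import pandas as pd

df = pd.DataFrame({
    "customer": [" Acme ", "acme", "Globex", None],
    "spend":    [100.0, 100.0, None, 50.0],
})

clean = (
    df.assign(customer=df["customer"].str.strip().str.title())  # normalize text
      .dropna(subset=["customer"])                              # drop unusable rows
      .drop_duplicates()                                        # remove exact repeats
      .fillna({"spend": 0.0})                                   # fill known gaps
)
print(clean)  # " Acme " and "acme" collapse into a single clean row
```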

Potential Solutions for Privacy and Security Challenges

Let’s talk about examining your cloud providers, having an adequate access control policy, and protecting the data.

Examine Your Cloud Providers

Storing big data in the cloud is a good option, but along with it, we need to take care of the data’s protection mechanisms. We should make sure that our cloud provider undergoes frequent security audits and has a disclaimer that includes paying penalties in case adequate security standards have not been met.

Have an Adequate Access Control Policy

Create policies in such a way that access is granted to authorized users only.
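A minimal sketch of that principle is a role-based check in front of every data access; the role names and policy table here are hypothetical.

```python
# Hypothetical policy: which actions each role may perform.
POLICY = {
    "analyst":  {"read"},
    "engineer": {"read", "write"},
}

def authorize(role, action):
    """Raise unless the policy explicitly grants this action to this role."""
    if action not in POLICY.get(role, set()):
        raise PermissionError(f"role '{role}' may not '{action}'")

authorize("engineer", "write")       # allowed: passes silently
try:
    authorize("analyst", "write")    # not in the policy
except PermissionError as exc:
    print(exc)                       # access denied, as intended
```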

Protect the Data

Data should be adequately protected at all stages, starting with the raw data. Encryption should be in place to ensure that no sensitive data is leaked; in fact, the adequate use of encryption is the main way to ensure that data remains protected. For example, attribute-based encryption (a type of public-key encryption in which the secret key of a user and the ciphertext depend on attributes) provides access control over encrypted data.
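Attribute-based encryption requires specialized libraries, so as a simpler stand-in, here is ordinary symmetric encryption at rest using the widely used cryptography package (assumed installed). The principle is the one stated above: sensitive data never sits in plaintext.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key-management service
cipher = Fernet(key)

token = cipher.encrypt(b"ssn=123-45-6789")  # what gets stored or transmitted
print(token)                                # unreadable without the key
print(cipher.decrypt(token))                # only key holders recover the data
```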


Everything has two sides: opportunities and challenges are everywhere, and threats should be considered rather than neglected.

We use many techniques for big data analysis, including statistical analysis, batch processing, machine learning, data mining, intelligent analysis, cloud computing, quantum computing, and data stream processing. The big data industry has a great future, with plenty of scope for research and improvement.



Arun Yaligar is a MuleSoft Developer at ennVee. 

This article was originally posted to The Integration Zone.