• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to the OS and BI solutions. And of course also for the great Red Hat products, such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    For Windows Server 2012 onwards, Windows 7 and higher on the client, and Microsoft Cloud Services (Azure, Office 365, etc.) related consulting services.

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), and responsive and adaptive websites.

26 September 2017

Oracle Sparc M8 and Oracle Advanced Analytics

Oracle SPARC M8 released with 32 cores, 256 threads, and a 5.0 GHz clock

Oracle announced its eighth generation SPARC platform, delivering new levels of security capabilities, performance, and availability for critical customer workloads. Powered by the new SPARC M8 microprocessor, new Oracle systems and IaaS deliver a modern enterprise platform, including proven Software in Silicon with new v2 advancements, enabling customers to cost-effectively deploy their most critical business applications and scale-out application environments with extreme performance both on-premises and in Oracle Cloud.

Oracle’s Advanced Analytics & Machine Learning 12.2c New Features & Road Map; Bigger, Better, Faster, More!

SPARC M8 processor-based systems, including the Oracle SuperCluster M8 engineered systems and SPARC T8 and M8 servers, are designed to seamlessly integrate with existing infrastructures and include fully integrated virtualization and management for private cloud. All existing commercial and custom applications will run on SPARC M8 systems unchanged with new levels of performance, security capabilities, and availability. The SPARC M8 processor with Software in Silicon v2 extends the industry's first Silicon Secured Memory, which provides always-on hardware-based memory protection for advanced intrusion protection and end-to-end encryption and Data Analytics Accelerators (DAX) with open API's for breakthrough performance and efficiency running Database analytics and Java streams processing. Oracle Cloud SPARC Dedicated Compute service will also be updated with the SPARC M8 processor.

Spark SQL: Another 16x Faster After Tungsten: Spark Summit East talk by Brad Carlile

"Oracle has long been a pioneer in engineering software and hardware together to secure high-performance infrastructure for any workload of any size," said Edward Screven, chief corporate architect, Oracle. "SPARC was already the fastest, most secure processor in the world for running Oracle Database and Java. SPARC M8 extends that lead even further."

The SPARC M8 processor offers security enhancements delivering 2x faster encryption and 2x faster hashing than x86 and 2x faster than SPARC M7 microprocessors. The SPARC M8 processor's unique design also provides always-on security by default and built-in protection of in-memory data structures from hacks and programming errors.

SPARC M8's silicon innovation provides new levels of performance and efficiency across all workloads, including: 
  • Database: Engineered to run Oracle Database faster than any other microprocessor, SPARC M8 delivers 2x faster OLTP performance per core than x86 and 1.4x faster than M7 microprocessors, as well as up to 7x faster database analytics than x86.
  • Java: SPARC M8 delivers 2x better Java performance than x86 and 1.3x better than M7 microprocessors. DAX v2 delivers 8x more efficient Java streams processing, improving overall application performance.
  • In-Memory Analytics: The innovative new processor delivers 7x more queries per minute (QPM) per core than x86 for database analytics.
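The Java streams claim above refers to the filter-and-aggregate shape of workload sketched below. This is a plain java.util.stream pipeline (the offload to DAX is transparent, so no special API appears here); the data and threshold are invented for illustration:

```java
import java.util.stream.IntStream;

public class StreamScan {
    // Filter-and-aggregate over a column of values: the scan/filter shape
    // that Oracle says DAX engines can process more efficiently.
    static long sumAbove(int[] column, int threshold) {
        return IntStream.of(column)
                .filter(v -> v > threshold)  // row filtering (predicate)
                .asLongStream()
                .sum();                      // aggregation
    }

    public static void main(String[] args) {
        int[] sales = {120, 45, 300, 80, 510};
        System.out.println(sumAbove(sales, 100)); // 120 + 300 + 510 = 930
    }
}
```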
Oracle is committed to delivering the latest in SPARC and Solaris technologies and servers to its global customers. Oracle's long history of binary compatibility across processor generations continues with M8, providing an upgrade path for customers when they are ready. Oracle has also publicly committed to supporting Solaris until at least 2034.

The Oracle SPARC M8 processor is available in:

  • Oracle SPARC T8-1 server
  • Oracle SPARC T8-2 server
  • Oracle SPARC T8-4 server
  • Oracle SPARC M8-8 server
  • Oracle SuperCluster M8 engineered system

More information in the Oracle SPARC M8 Launch Webcast: http://www.oracle.com/us/corporate/events/next-gen-secure-infrastructure-platform/index.html

About Oracle 

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), please visit us at oracle.com.

Big Data Analytics Using Oracle Advanced Analytics and Big Data SQL

The Oracle SPARC M8 is now out, and it is a monster of a chip. Each SPARC M8 processor supports up to 32 cores and 64MB of L3 cache. Each core can handle 8 threads, for up to 256 threads per processor. Compare this to the AMD EPYC 7601, the world’s only 32-core x86 processor as of this writing, which handles 64 threads and also has 64MB of L3 cache. The SPARC M8 cores can also clock up to 5.0GHz, faster than current high-core-count x86 server chips from Intel and AMD. That is quite astounding given the SPARC M8 still uses 20nm process technology.

Beyond the core specs, there is much more going on. Oracle has specific accelerators for cryptography, Java performance, database performance, and more. For example, there are 32 on-chip Data Analytics Accelerator (DAX) engines. DAX engines offload query processing and perform real-time data decompression. Oracle’s software business for the Oracle Database line is still strong, and these capabilities are what is often referred to as “SQL in Silicon.” Oracle claims that Oracle Database 12c is up to 7 times faster using M8 with DAX than competing CPUs. That is a big deal for software licensing costs. Another interesting feature, inline decompression, allows decompression of data stored in memory with no claimed performance penalty.

Oracle SPARC M8 Processor Key Specifications

Here are the key specs for the new Oracle SPARC CPUs:

  • 32 SPARC V9 cores, maximum frequency: 5.0 GHz
  • Up to 256 hardware threads per processor; each core supports up to 8 threads
  • Total of 64 MB L3 cache per processor, 16-way set-associative and inclusive of all inner caches
  • 128 KB L2 data cache per core; 256 KB L2 instruction cache shared among four cores
  • 32 KB L1 instruction cache and 16 KB L1 data cache per core
  • Quad-issue, out-of-order integer execution pipelines, one floating-point unit, and integrated cryptographic stream processing per core
  • Sophisticated branch predictor and hardware data prefetcher
  • 32 second-generation DAX engines; 8 DAX units per processor with four pipelines per DAX unit
  • Encryption instruction accelerators in each core with direct support for 16 industry-standard cryptographic algorithms plus random-number generation: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, MD5, RSA, SHA-1, SHA-3, SHA-224, SHA-256, SHA-384, and SHA-512
  • 20 nm process technology
  • Open Oracle Solaris APIs available for software developers to leverage the Silicon Secured Memory and DAX technologies in the SPARC M8 processor
  • Solaris support committed until at least 2034
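Applications typically reach the per-core encryption accelerators through standard APIs rather than new code. The sketch below uses an ordinary JCE call for SHA-256, one of the listed algorithms; on SPARC/Solaris the same code path can be hardware-accelerated with no application changes (the helper name and sample input are ours):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashDemo {
    // Standard JCE digest call; no SPARC-specific code is needed for the
    // platform to route this to a hardware crypto unit where available.
    static String sha256Hex(String input) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 is a mandatory JDK algorithm", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sha256Hex("SPARC M8"));
    }
}
```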

In the official Oracle SPARC M8 release, Oracle has a note that is a clear nod to the organizational changes we mentioned in a recent Oracle server release:

Oracle is committed to delivering the latest in SPARC and Solaris technologies and servers to its global customers. Oracle’s long history of binary compatibility across processor generations continues with M8, providing an upgrade path for customers when they are ready. Oracle has also publicly committed to supporting Solaris until at least 2034.

Oracle is clearly hearing from its customers about the mass layoffs of Solaris engineering teams.

New Oracle SPARC M8 Systems

Five new SPARC V9 systems are available from Oracle today:

  • Oracle SPARC T8-1 server
  • Oracle SPARC T8-2 server
  • Oracle SPARC T8-4 server
  • Oracle SPARC M8-8 server
  • Oracle SuperCluster M8 engineered system

The Evolution and Future of Analytics

We live in a world where things around us are ever changing.

Measurement metrics are just-in-time and predictive and need a lot of augmented intelligence; at the same time, we are developing more complex analytics of the customer's mindset when it comes to buying patterns.

This new type of analytics can give us insight into how the customer feels and what he or she experiences.

Oracle's Machine Learning & Advanced Analytics 12.2 & Oracle Data Miner 4.2 New Features

Thus, smart information will become widely available.

In the future, you may walk into a store and find one or all of the below, which can be built as solutions:

a) A robot welcoming you and interacting with you using a connected back end and analytics.

b) Natural language or human analytics that can automatically read your mood to ultimately improve customer satisfaction.

c) Historical data about you as a customer to help upsell or cross-sell products based on your interests.

d) Automatic analysis of what you're doing to bring near real-time context to data; this will enable the retailer to build a mobile-based intuitive presence or a no-billing architecture.

e) A personal assistant model to better serve you as a customer, empowering retailers to provide solutions to unsure customers.

f) In-product or "things" analytics to provide information about the product, making things intelligent through RFID, intelligent tagging, sensors, etc.

g) Discounts/coupons based on your historical buying patterns; post-purchase analytics.

h) Interactive dashboards that make augmented decisions about a few areas based on reviews; this would take expert reviews, phone calls, product management and more into account.

i) A store platform of grammar, syntax, semantics and data-science patterns to identify recurring patterns and challenges and build new solutions that evolve continuously.

Based on the above, let's dive into different types of analytics available on the market. We'll look at how they will blend and intersect to develop more augmented applications for the future.

Insights into Real-world Data Management Challenges

1) Historical Analytics

This is the traditional analytics of business intelligence focused on analyzing stored data and reporting. We would build repositories and create analyses and dashboards for historical data. Solutions would include Oracle Business Intelligence.

2) Current Analytics 

Here, analytics measures the current process. For example, we would measure the effectiveness of a process as it happens (business activity monitoring), using a stream that processes arriving data and analyzes it in real time.

3) Enterprise Performance Management

Here the objective is to focus on projections and what-if analysis with the current data and make projections for the future. An example would be a Hyperion or EPM-based solution, which can help with planning and projection-based reporting. EPM today is also available as a cloud service.

4) Predictive Analytics

With the Big Data market growing, and with unstructured data adding the dimensions of velocity, variety and volume, the data world is moving on to more predictive analytics with a blended mix of data. There is one world of data in Hadoop and another in the classical data warehouse; we can mix and match the two and do Big Data analytics.

Predictive analytics is more like compass-guided decision making based on data analysis patterns. Oracle has an end-to-end Big Data solution spanning the data warehouse, Hadoop and analytics that can help develop predictive solutions.

MSE 2017 - Customer Voted Session: Rocketing Your Knowledge Management Success with Analytics

5) Prescriptive Analytics

To extend predictive analytics, we also develop systems that make decisions once we have a prediction, e.g. sending emails and connecting systems as patterns are detected. This is the basis for building more heuristic systems that make decisions about detected patterns.
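The predict-then-act step can be sketched as a minimal rule: once a model scores an entity above a threshold, prescribe a concrete action instead of just reporting the score. Everything here is hypothetical (the churn scores, the threshold, and the action text are invented), not an Oracle API:

```java
import java.util.ArrayList;
import java.util.List;

public class PrescriptiveSketch {
    // Hypothetical prescriptive rule: map each high churn score (from some
    // upstream predictive model) to an action the business should take.
    static List<String> prescribe(double[] churnScores, double threshold) {
        List<String> actions = new ArrayList<>();
        for (int i = 0; i < churnScores.length; i++) {
            if (churnScores[i] > threshold) {
                actions.add("send retention email to customer " + i);
            }
        }
        return actions;
    }

    public static void main(String[] args) {
        // Customers 1 and 2 exceed the 0.7 threshold, so two actions result.
        System.out.println(prescribe(new double[]{0.2, 0.9, 0.75}, 0.7));
    }
}
```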

6) Machine Analytics 

Every device and machine is going to generate data. Machine analytics is a blended form of data analysis that can be embedded into standard sources to enhance and improve the overall data pattern. Oracle provides IoT Cloud Service as a solution to connect, analyze and integrate data from various machines and enrich applications like ERP, CRM and more.

Oracle Analytics and Big Data: Unleash the Value

7) AI Based Analytics

AI, or deep learning, is the next-generation analytics pattern, where we can train systems, or any entity, to think, and then embed the analytics pattern in the solution.

8) IORT / Robotics Analytics

With robots, bots and personal assistants complementing solutions, many patterns of thinking and execution are distributed across multiple systems. IoRT (Internet of Robotic Things) or robotics analytics is a new branch that will focus on how we can analyze patterns from semi-thinking devices.

9) Data Science as a Service

A new branch where the analysis goes deeper in terms of algorithms and storage, and is also more domain-driven. Even though data science is just one branch of analytics, you will see a lot of analytics development here. Data scientists who specialize in identifying patterns will go a long way toward building patterns that are more replicable.

10) Integrated Analytics

In the future, we can form an integrated view of the above. This could be ONE IDE, in which you would derive patterns based on business need and use case. Today we have a fragmented set of tools to manage analytics; these will slowly become integrated into one view.

Oracle has solutions at different levels; most of them are also available as a cloud service (Software as a Service, Platform as a Service).

MSE 2017 - Advanced Analytics for Developers

It's imperative to build the right mix of solutions for the right problem and integrate these solutions.

  • Historical perspective --> Business Intelligence
  • Current processing  -->  Streaming (event processing) and Business Activity Monitoring
  • Enterprise performance management  --> Hyperion
  • Heterogeneous source of data and also large analysis of data --> Big Data Solution
  • Predictive and Prescriptive analytics --> R language and Advanced Analytics
  • Machine related --> IOT Solutions and Cloud Service

Oracle Architectural Elements Enabling R for Big Data

Oracle University provides competency solutions for all the above and empowers you with skill development and well-respected certifications that validate your expertise:

  • Big Data Analytics training
  • BI Data Analytics training
  • Hyperion training
  • Cloud PAAS Platform for Analytics and BI training

More Information:

Oracle Visual Analytics

22 August 2017

Oracle Database 12c Release 2

Oracle Database 12c Release 2 (12.2) is now available everywhere

Ask Tom Answer Team (Connor McDonald and Chris Saxon) on Oracle Database 12c Release 2 New Features

Oracle Database 12.2c Architecture Diagram

The latest generation of the world's most popular database, Oracle Database 12c Release 2 (12.2), is now available everywhere: in the Cloud, with Oracle Cloud at Customer, and on-premises. This latest release provides organizations of all sizes with access to the world’s fastest, most scalable and reliable database technology in a cost-effective, hybrid Cloud environment. 12.2 also includes a series of innovations that help customers easily transform to the Cloud while preserving their investments in Oracle Database technologies, skills and resources.

Oracle RAC 12c Release 2 New Features

Database Security - Comprehensive Defense in Depth

Partner Webcast – Oracle Identity Cloud Service: Introducing Secure, On-Demand Identity Management

Oracle Database 12c provides multi-layered security including controls to evaluate risks, prevent unauthorized data disclosure, detect and report on database activities and enforce data access controls in the database with data-driven security. Oracle Database 12c Release 2 (12.2), now available in the Cloud and on-premises, introduces new capabilities such as on-line and off-line tablespace encryption and database privilege analysis. Combined with Oracle Key Vault and Oracle Audit Vault and Database Firewall, Oracle Database 12c provides unprecedented defense-in-depth capabilities to help organizations address existing and emerging security and compliance requirements.

Partner Webcast – Enabling Oracle Database High Availability and Disaster Recovery with Oracle Cloud

Database Cloud Services

Oracle Cloud provides several Oracle Cloud Service deployment choices. These choices allow you to start at the cost and capability level suitable to your use case and then give you the flexibility to adapt as your requirements change over time. Choices include single schemas, dedicated pluggable databases, virtualized databases, bare metal databases and databases running on world-class engineered infrastructure.

The Oracle Exadata Cloud Service offers the largest, most business-critical database workloads a place to run in Oracle Cloud. With all the infrastructure components in place, including hardware, networking, storage, database and virtualization, secure, highly available, high-performance capacity can be provisioned in a few clicks. Exadata Cloud Service is engineered to support OLTP, data warehouse / real-time analytics and mixed database workloads at any scale. With this service, you maintain control of your database while Oracle manages the hardware, storage and networking infrastructure, letting you focus on growing your business.


Oracle Database Exadata Cloud Machine delivers the world’s most advanced database cloud to customers who require their databases to be located on-premises. Exadata Cloud Machine uniquely combines the world’s #1 database technology and Exadata, the most powerful database platform, with the simplicity, agility and elasticity of a cloud-based deployment. It is identical to Oracle’s Exadata public cloud service, but located in customers’ own data centers and managed by Oracle Cloud experts. Every Oracle Database and Exadata feature and option is included with the Exadata Cloud Machine subscription, ensuring the highest performance, best availability, most effective security and simplest management. Databases deployed on Exadata Cloud Machine are 100% compatible with existing on-premises databases, or with databases deployed in Oracle’s public cloud. Exadata Cloud Machine is ideal for customers who desire cloud benefits but cannot move their databases to the public cloud due to sovereignty laws, industry regulations or corporate policies, or who find it impractical to move databases away from other tightly coupled on-premises IT infrastructure.

Oracle Database 12c Release 2 Sharded Database Overview and Install (Part 1)

Oracle Sharding Part 2

Oracle Sharding Part 3

Oracle Sharding with Suresh Gandhi

Overview of Oracle‘s Big Data Management System

As today's enterprises embrace big data, their information architectures are evolving. The new information architecture in the big data era embraces emerging technologies such as Hadoop, but at the same time leverages the core strengths of previous data warehouse architectures.

Partner Webcast – Oracle Ravello Cloud Service: Easy Deploying of Big Data VM on Cloud

The data warehouse, built upon Oracle Database 12c Release 2 and Exadata, will continue to be the primary analytic database for storing core transactional data: financial records, customer data, point- of-sale data and so forth (see Key Data Warehousing and Big Data Capabilities for more information).

However, the data warehouse will be augmented by a big-data system (built upon Oracle Big Data Appliance), which functions as a ‘data reservoir’. This will be the repository for the new sources of large volumes of data: machine-generated log files, social-media data, and videos and images -- as well as a repository for more granular transactional data or older transactional data which is not stored in the data warehouse.

Data flows between the big data system and the data warehouse to create a unified foundation: the Oracle Big Data Management System.

The transition from the Enterprise Data Warehouse-centric architecture to the Big Data Management System, whether on-premises, in the Cloud, or in hybrid Cloud systems, is going to revolutionize any company's information management architecture. Oracle's Statement of Direction outlines Oracle's vision for delivering innovative new technologies for building the information architecture of tomorrow.

Partner Webcast – Docker Agility in Cloud: Introducing Oracle Container Cloud Service

Big data is in many ways an evolution of data warehousing. To be sure, there are new technologies used for big data, such as Hadoop and NoSQL databases. And the business benefits of big data are potentially revolutionary. However, at its essence, big data requires an architecture that acquires data from multiple data sources, organizes and stores that data in a suitable format for analysis, enables users to efficiently analyze the data and ultimately helps to drive business decisions. These are the exact same principles that IT organizations have been following for data warehouses for years.

The new information architecture that enterprises will pursue in the big data era is an extension of their previous data warehouse architectures. The data warehouse, built upon a relational database, will continue to be the primary analytic database for storing much of a company’s core transactional data, such as financial records, customer data, and sales transactions. The data warehouse will be augmented by a big-data system, which functions as a ‘data lake’. This will be the repository for the new sources of large volumes of data: machine-generated log files, social-media data, and videos and images -- as well as the repository for more granular transactional data or older transactional data which is not stored in the data warehouse. Even though the new information architecture consists of multiple physical data stores (relational, Hadoop, and NoSQL), the logical architecture is a single integrated data platform, spanning the relational data warehouse and the Hadoop-based data lake.

Technologies such as Oracle Big Data SQL make this distributed architecture a reality; Big Data SQL provides data virtualization capabilities, so that SQL can be used to access any data, whether in relational databases or Hadoop or NoSQL. This virtualized SQL layer also enables many other languages and environments, built on top of SQL, to seamlessly access data across the entire big data platform.

Oracle Database 12c Release 2 and Oracle Exadata: A Data Warehouse as a Foundation for Big Data

Even as new big data architectures emerge and mature, business users will continue to analyze data by directly leveraging and accessing data warehouses. The rest of this section describes how Oracle Database 12c Release 2 provides a comprehensive platform for data warehousing that combines industry-leading scalability and performance, deeply integrated analytics, and advanced workload management, all in a single platform running on an optimized hardware configuration.

Hot cloning and refreshing PDBs in Oracle 12cR2


The bedrock of a solid data warehouse solution is a scalable, high-performance hardware infrastructure. One of the long-standing challenges for data warehouses has been to deliver the IO bandwidth necessary for large-scale queries, especially as data volumes and user workloads have continued to increase. While the Oracle Exadata Database Machine is designed to provide the optimal database environment for every enterprise database, the Exadata architecture also provides a uniquely optimized storage solution for data warehousing that delivers order-of-magnitude performance gains for large-scale data warehouse queries and very efficient data storage via compression for large data volumes. A few of the key features of Exadata that are particularly valuable to data warehousing are:

  • » Exadata Smart Scans. With traditional storage, all database intelligence resides on the database servers. However, Exadata has database intelligence built into the storage servers. This allows database operations, and specifically SQL processing, to leverage the CPUs in both the storage servers and database servers to vastly improve performance. The key feature is “Smart Scans”, the technology of offloading some of the data-intensive SQL processing into the Exadata Storage Server: specifically, row-filtering (the evaluation of where-clause predicates) and column-filtering (the evaluation of the select-list) are executed on the Exadata Storage Server, and a much smaller set of filtered data is returned to the database servers. “Smart Scans” can improve the performance of large queries by an order of magnitude and, in conjunction with the vastly superior IO bandwidth of Exadata’s architecture, deliver industry-leading performance for large-scale queries.
  • » Exadata Storage Indexes. Completely automatic and transparent, Exadata Storage Indexes maintain each column’s minimum and maximum values of tables residing in the storage server. With this information, Exadata can easily filter out unnecessary data to accelerate query performance.
  • » Hybrid Columnar Compression. Data can be compressed within the Exadata Storage Server into a highly efficient columnar format that provides up to a 10 times compression ratio, without any loss of query performance. And, for pure historical data, a new archival level of hybrid columnar compression can be used that provides up to 40 times compression ratios.
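The Smart Scan division of labor can be illustrated with a toy model (purely an illustration of the idea, not Exadata code): the "storage side" applies the where-clause predicate and returns only the select-list column, so the "database server" receives a much smaller result set:

```java
import java.util.ArrayList;
import java.util.List;

public class SmartScanSketch {
    record Row(int id, String region, double amount) {}

    // Toy "storage-side" scan: row filtering (the predicate) and column
    // filtering (keeping only the selected column) both happen before any
    // data is shipped back to the "database server".
    static List<Double> storageScan(List<Row> storedBlocks, String region) {
        List<Double> filtered = new ArrayList<>();
        for (Row r : storedBlocks) {
            if (r.region().equals(region)) { // row filtering at storage
                filtered.add(r.amount());    // column filtering at storage
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        List<Row> data = List.of(
            new Row(1, "EMEA", 100.0),
            new Row(2, "APAC", 250.0),
            new Row(3, "EMEA", 75.0));
        System.out.println(storageScan(data, "EMEA")); // [100.0, 75.0]
    }
}
```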

Partner Webcast - Oracle Cloud Machine Technical Overview (Part1)

Partner Webcast - Oracle Cloud Machine Technical Overview (Part 2)

Oracle Database In-Memory

While Exadata tackles one major requirement for high-performance data warehousing (high-bandwidth IO), Oracle Database In-Memory tackles another requirement: interactive, real-time queries. Reading data from memory can be orders of magnitude faster than reading from disk, but that is only part of the performance benefits of In-Memory: Oracle additionally increases in-memory query performance through innovative memory-optimized performance techniques such as vector processing and an optimized in-memory aggregation algorithm. Key features include:

  • » In-Memory (IM) Column Store. Data is stored in a compressed columnar format when using Oracle Database In-Memory. A columnar format is ideal for analytics, as it allows for faster data retrieval when only a few columns are selected from a table. Columnar data is very amenable to efficient compression; in-memory data is typically compressed 2-20x, which enables larger volumes of raw data to be stored in the in-memory column store.
  • » SIMD Vector Processing. When scanning data stored in the IM column store, Database In-Memory uses SIMD vector processing (Single Instruction, Multiple Data). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. In this way, SIMD vector processing enables Oracle Database In-Memory to scan and filter billions of rows per second.
  • » In-Memory Aggregation. Analytic queries require more than just simple filters and joins. They require complex aggregations and summaries. Oracle Database In-Memory provides an aggregation algorithm specifically optimized for the join-and-aggregate operations found in typical star queries. This algorithm allows dimension tables to be joined to the fact table, and the resulting data set aggregated, all in a single in-memory pass of the fact table.
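The single-pass join-and-aggregate idea behind In-Memory Aggregation can be sketched as follows (a toy model in the spirit of the technique, not Oracle's algorithm): the dimension lookup and the aggregation happen in the same pass over the fact rows:

```java
import java.util.HashMap;
import java.util.Map;

public class StarAggregate {
    // One pass over the fact table: for each fact row, look up its dimension
    // attribute (the "join") and fold the measure into the running total
    // (the "aggregate") in the same step.
    static Map<String, Long> salesByRegion(int[] factStoreIds, long[] factAmounts,
                                           Map<Integer, String> storeToRegion) {
        Map<String, Long> totals = new HashMap<>();
        for (int i = 0; i < factStoreIds.length; i++) {
            String region = storeToRegion.get(factStoreIds[i]); // join
            totals.merge(region, factAmounts[i], Long::sum);    // aggregate
        }
        return totals;
    }

    public static void main(String[] args) {
        Map<Integer, String> dim = Map.of(1, "EMEA", 2, "APAC");
        System.out.println(salesByRegion(
            new int[]{1, 2, 1}, new long[]{10, 20, 5}, dim));
    }
}
```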

Oracle Database In-Memory is useful for every data-warehousing environment. Oracle Database In-Memory is entirely transparent to applications and tools, so that it is simple to implement. Unlike a pure in-memory database, not all of the objects in an Oracle database need to be populated in the IM column store. The IM column store should be populated with the most performance-critical data, while less performance-critical data can reside on lower cost flash or disk. Thus, even the largest data warehouse can see considerable performance benefits from In-Memory.

Query Performance

Oracle provides performance optimizations for every type of data warehouse environment. Data warehouse workloads are often complex, with different users running vastly different operations, with similarly different expectations and requirements for query performance. Exadata and In-Memory address many performance challenges, but many other fundamental performance capabilities are necessary for enterprise-wide data warehouse performance.
Oracle meets the demands of data warehouse performance by providing a broad set of optimization techniques for every type of query and workload:

  • » Advanced indexing and aggregation techniques for sub-second response times for reporting and dashboard queries. Oracle’s bitmap and b-tree indexes and materialized views provide the developer and DBA’s with tools to make pre-defined reports and dashboards execute with fast performance and minimal resource requirements.
  • » Star query optimizations for dimensional queries. Most business intelligence tools have been optimized for star-schema data models. The Oracle Database is highly optimized for these environments; Oracle Database In-Memory provides fast star-query performance by leveraging its in-memory aggregation capabilities. For other database environments, Oracle’s “star transformation” leverages bitmap indexes on the fact table to efficiently join multiple dimension tables in a single processing step. Meanwhile, Oracle OLAP is a complete multidimensional analytic engine embedded in the Oracle Database, storing data within multidimensional cubes inside the database accessible via SQL. The OLAP environment provides very fast access to aggregate data in a dimensional environment, in addition to sophisticated calculation capabilities (the latter is discussed in a subsequent section of this paper).
  • » Scalable parallelized processing. Parallel execution is one of the fundamental database technologies that enable users to query data of any volume. It is the ability to apply multiple CPU and IO resources to the execution of a single database operation. Oracle’s parallel architecture allows any query to be parallelized, and Oracle dynamically chooses the optimal degree of parallelism for every query based on the characteristics of the query, the current workload on the system and the priority of the requesting user.
  • » Partition pruning and partition-wise joins. Partition pruning is perhaps one of the simplest query-optimization techniques, but also one of the most beneficial. Partition pruning enables a query to only access the necessary partitions, rather than accessing an entire table – frequently, partition-pruning alone can speed up a query by two orders of magnitude. Partition-wise joins provide similar performance benefits when joining tables that are partitioned by the same key. Together these partitioning optimizations are fundamental for accelerating performance for queries on very large database objects.
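Partition pruning can be sketched with a toy model (an illustration of the idea, not Oracle internals): each partition records its key range, and any partition whose range cannot contain the predicate value is skipped without reading a single row:

```java
import java.util.List;

public class PartitionPruning {
    record Partition(int minKey, int maxKey, int[] rows) {}

    // Count rows matching `key`, scanning only partitions whose
    // [minKey, maxKey] range can contain it; the rest are pruned.
    static long countKey(List<Partition> table, int key) {
        long count = 0;
        for (Partition p : table) {
            if (key < p.minKey() || key > p.maxKey()) continue; // pruned
            for (int row : p.rows()) {
                if (row == key) count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<Partition> table = List.of(
            new Partition(0, 9, new int[]{1, 5, 5}),
            new Partition(10, 19, new int[]{12, 15}));
        // Only the first partition is scanned for key 5.
        System.out.println(countKey(table, 5));
    }
}
```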

Oracle Database 12c Release 2 Rapid Home Provisioning and Maintenance

The query performance techniques described here operate in a concerted fashion, and provide multiplicative performance gains. For example, a single query may be improved by 10x performance via partition-pruning, by 5x via parallelism, by 20x via star query optimization, and by 10x via Exadata smart scans – a net improvement of 10,000x compared to a naïve SQL engine.
Several foundational technologies orchestrate the query capabilities of the Oracle Database. Every query running in a data warehouse benefits from:

  • » A query optimizer that determines the best strategy for executing each query, from among all of the execution techniques available to Oracle. Oracle’s query optimizer provides advanced query-transformation capabilities, and, in Oracle Database 12c, it adds Adaptive Query Optimization, which enables the optimizer to make run-time adjustments to execution plans.
  • » A sophisticated resource manager for ensuring performance even in databases with complex, heterogeneous workloads. The Database Resource Manager allows end-users to be grouped into ‘resource consumer groups’; for each group, the database administrator can set policies to govern the amount of CPU and IO resources that can be utilized, as well as policies for proactive query governing and for query queuing. With the Database Resource Manager, Oracle provides the capabilities to ensure that the data warehouse can address the requirements of multiple concurrent workloads, so that a single data warehouse platform can, for example, simultaneously service hundreds of online business analysts doing ad hoc analysis in a business intelligence tool, thousands of business users viewing dashboards, and dozens of data scientists doing deep data exploration.
  • » Management Packs to automate the ongoing performance tuning of a data warehouse. Based upon the ongoing performance and query workload, management packs provide recommendations for all aspects of performance, including indexes and partitioning.
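As a hedged sketch of how such resource policies are expressed (the group, plan, and directive names below are hypothetical), the Database Resource Manager is driven through the DBMS_RESOURCE_MANAGER PL/SQL package:

```sql
-- Sketch: cap ad hoc BI analysts at a share of CPU. Names are illustrative;
-- see the DBMS_RESOURCE_MANAGER package documentation for the full options.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'ADHOC_ANALYSTS',
    comment        => 'Ad hoc BI queries');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DW_PLAN',
    comment => 'Data warehouse workload plan');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DW_PLAN',
    group_or_subplan => 'ADHOC_ANALYSTS',
    comment          => 'Cap ad hoc work at 25% CPU',
    mgmt_p1          => 25);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

The same package supports directives for parallel degree limits, query queuing, and proactive governing of long-running queries.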

More Information:

The NEW Oracle Database Appliance Portfolio   https://go.oracle.com/LP=55375?elqCampaignId=52477&src1=ad:pas:go:dg:oda&src2=wwmk160603p00096c0015&SC=sckw=WWMK160603P00096C0015&mkwid=sFw6OzrF5%7Cpcrid%7C215765003921%7Cpkw%7Coracle%20database%7Cpmt%7Cp%7Cpdv%7Cc%7Csckw=srch:oracle%20database


Oracle Database 12c Release 2 - Get Started with Oracle Database   https://docs.oracle.com/database/122/index.htm





25 July 2017

DB2 12 for z/OS – The #1 Enterprise Database

Some IBM DB2 12 highlights:

  • Improved business insight: highly concurrent queries run up to 100x faster.
  • Faster mobile support: 6 million transactions per minute via RESTful API.
  • Enterprise scalability, reliability and availability for IoT apps: 11.7 million inserts per second, 256 trillion rows per table.
  • Reduced cost: 23 percent lower CPU cost through advanced in-memory techniques.

DB2 12 Overview Nov UG 2016 Final

Links for the above video:

Strategy and Directions for the IBM® Mainframe

Machine Learning for z/OS

Temporal Tables, Transparent Archiving in DB2 for z/OS and IDAA

IBM Z Software z14 Announcement

Throughout the development of the all-new IBM z14, we have worked closely with dozens of clients around the world to understand what they need to accelerate their digital transformation, securely. We learned that data security is foundational to everything they do, that they are striving to leverage that data to gain a competitive edge, and that ultimately everybody is trying to move faster to compete at the speed of business.

Data is the New Security Perimeter with Pervasive Encryption

Job #1 is protecting their confidential data, and that of their clients, from both internal and external threats. The z14 introduces pervasive encryption as the new standard, with 100% of data encrypted at rest and in motion. It is uniquely able to bulk-encrypt 100% of data in IBM Information Management System (IMS), IBM DB2 for z/OS, and Virtual Storage Access Method (VSAM) with no changes to applications and no impact on SLAs.
IBM MQ for z/OS already encrypts messages from end-to-end with its Advanced Message Security feature. On the new z14, MQ can scale to greater heights with the 7X boost in on-chip encryption performance compared to z13.

Additionally, with secure services containers, z14 can prevent data breaches by rogue administrators by restricting root access via graphical user interfaces. This is one of the many differentiating security features provided with IBM’s Blockchain High Security Business Network, delivered in the IBM Cloud.

DB2 12 Technical Overview PART 1

DB2 12 Technical Overview PART 2

Ever-evolving Intelligence with Machine Learning

Data is the world’s next great natural resource. Our clients are looking to gain a competitive edge with the vast amounts of data they have and turn insights into actions in real time when it matters.  IBM Machine Learning for z/OS can decrease the time businesses take to continuously build, train, and deploy intelligent behavioral models by keeping the data on IBM Z where it is secure.  They can also take advantage of IBM DB2 Analytics Accelerator for z/OS’s new Zero Latency technology, which uses a just-in-time protocol for data coherency for analytic requests to train and retrain their models on the fly.

IBM Z provides the agility to continuously deliver new function via microservices, APIs, or more traditional applications.
Innovate with Microservices and leverage open source.

Microservices can be built on z14 with Node.js, Java, Go, Swift, Python, Scala, Groovy, Kotlin, Ruby, COBOL, PL/I, and more.  They can be deployed in Docker containers where a single z14 can scale out to 2 million Docker containers.  These services can run up to 5X faster when co-located with the data they need on IBM Z.  The data could be existing data on DB2 or IMS or it could be using open source technologies such as MariaDB, Cassandra, or MongoDB.  On z14, a single instance of MongoDB can hold 17 TB of data without sharding!

What's new from the optimizer in DB2 12 for z/OS?

Another DB2 LUW Version 11.1 highlight is the capability to deploy DB2 pureScale on IBM Power Systems with little endian Linux operating systems. This approach works both with vanilla Transmission Control Protocol/Internet Protocol (TCP/IP), also known as sockets, and with higher-speed Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) networks. And, as expected, it provides all of DB2 pureScale's availability advantages, including online member recovery and rolling updates, along with DB2 pureScale's very strong scalability attributes. Here is an example of the throughput scaling experienced in the lab for an example OLTP workload running both TCP/IP and RoCE:

What's new in the DB2 12 base release

DB2® 12 for z/OS® takes DB2 to a new level, both extending the core capabilities and empowering the future. DB2 12 extends the core with new enhancements to scalability, reliability, efficiency, security, and availability. DB2 12 also empowers the next wave of applications in the cloud, mobile, and analytics spaces.
This information might sometimes also refer to DB2 12 for z/OS as "DB2" or "Version 12."

DB2 12 for z/OS - Catch the wave early and stay ahead!

Continuous delivery and DB2 12 function levels

DB2 12 introduces continuous delivery of new capabilities and enhancements in a single service stream as soon as they are ready. The result is that you can benefit from new capabilities and enhancements without waiting for an entire new release. Function levels enable you to control the timing of the activation and adoption of new features, with the option to continue to apply corrective and preventative service without adopting new feature function.
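A minimal sketch of the activation workflow (command syntax per the DB2 12 for z/OS documentation; the function level shown is the initial DB2 12 new-function level):

```sql
-- Check the current and highest activatable function level:
-DISPLAY GROUP DETAIL

-- Activate DB2 12 new function across the subsystem or data sharing group:
-ACTIVATE FUNCTION LEVEL (V12R1M500)

-- Individual applications then opt in per connection via the special register:
SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M500';
```

Corrective and preventive service can still be applied while the subsystem remains at a lower function level, which is what decouples maintenance from new-function adoption.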

New capabilities and enhancements in the DB2 12 base release
Most new capabilities in DB2 12 are introduced in DB2 12 function levels. However, some become available immediately in the base DB2 12 release, or when you apply maintenance.

Highlighted new capabilities in the DB2 12 base release
After the initial release, most new capabilities in DB2® 12 are introduced in DB2 12 function levels. However, some new capabilities become available immediately in the base DB2 12 release.

For information about new capabilities and enhancements in DB2 12 function levels, see What's new in DB2 12 function levels. The following sections describe new capabilities and enhancements introduced in the DB2 base (function levels 100 or 500) after general availability of DB2 12.

DevOps with DB2: Automated deployment of applications with IBM Urban Code Deploy:
With Urban Code Deploy, you can easily automate the deployment and configuration of database schema changes in DB2 11 and DB2 12. The automation reduces the time, costs, and complexity of deploying and configuring your business-critical apps, getting you to business value faster and more efficiently.

Modern language support DB2 for z/OS application development:
DB2 11 and DB2 12 now support application development in many modern programming and scripting languages. Application developers can use languages like Python, Perl, and Ruby on Rails to write DB2 for z/OS applications. Getting business value from your mainframe applications is now more accessible than ever before.

DB2 REST services improve efficiency and security:
The DB2 REST service provider, available in DB2 11 and DB2 12, unleashes your enterprise data and applications on DB2 for z/OS for the API economy. Mobile and cloud app developers can efficiently create consumable, scalable, and RESTful services. Mobile and cloud app developers can consume these services to securely interact with business-critical data and transactions, without special DB2 for z/OS expertise.
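As an illustrative sketch of the provider side (the collection, service, and DD names here are hypothetical), a native REST service is created from a single SQL statement with the BIND SERVICE command; mobile and cloud consumers then invoke it over HTTP:

```sql
-- Hedged sketch: expose one SQL statement as a REST service.
-- CUSTAPI, getCustomer, and DDSQL are hypothetical names; the DD statement
-- DDSQL points to a data set containing the SQL statement to expose.
BIND SERVICE(CUSTAPI) -
     NAME("getCustomer") -
     SQLDDNAME(DDSQL) -
     DESCRIPTION('Return one customer row as JSON')
```

A consumer would then POST JSON input parameters to the service URL (for example, a path such as /services/CUSTAPI/getCustomer on the DB2 server), receiving the result set back as JSON, with no DB2 for z/OS expertise required.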

Overview of DB2 12 new function availability

The availability of new function depends on the type of enhancement, the activated function level, and the application compatibility levels of applications. In the initial DB2 12 release, most new capabilities are enabled only after the activation of function level 500 or higher.

Virtual storage enhancements
Virtual storage enhancements become available at the activation of the function level that introduces them, or higher. Activation of function level 100 introduces all virtual storage enhancements in the initial DB2 12 release; function level 500 introduces no additional virtual storage enhancements.

Temporal Tables, Transparent Archiving in DB2 for z/OS and IDAA

Subsystem parameters
New subsystem parameter settings are in effect only when the function level that introduced them or a higher function level is activated. All subsystem parameter changes in the initial DB2 12 release take effect in function level 500. For a list of these changes, see Subsystem parameter changes in the DB2 12 base release.

Optimization enhancements
Optimization enhancements become available after the activation of the function level that introduces them (or higher) and a full prepare of the SQL statements. When a full prepare occurs depends on the statement type:

  • For static SQL statements, after bind or rebind of the package
  • For non-stabilized dynamic SQL statements, immediately, unless the statement is in the dynamic statement cache
  • For stabilized dynamic SQL statements, after invalidation, free, or a change to the application compatibility level

Activation of function level 100 introduces all optimization enhancements in the initial DB2 12 release; function level 500 introduces no additional ones.

SQL capabilities
New SQL capabilities become available after the activation of the function level that introduces them, or higher, for applications that run at the equivalent application compatibility level or higher. New SQL capabilities in the initial DB2 12 release become available in function level 500. You can continue to run SQL statements compatibly with lower function levels, or with previous DB2 releases, including DB2 11 and DB2 10.
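At the package level, new-function adoption is controlled with the APPLCOMPAT bind option; a sketch (the collection and package names are hypothetical):

```sql
-- Let a package use DB2 12 SQL capabilities:
REBIND PACKAGE(COLL1.PAYROLL) APPLCOMPAT(V12R1M500)

-- Keep another package running with DB2 11 SQL behavior on DB2 12:
REBIND PACKAGE(COLL1.LEGACY) APPLCOMPAT(V11R1)
```

This is how two applications on the same subsystem can run at different compatibility levels simultaneously.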

The demands of the mobile economy and the greater need for faster business insights, combined with the explosive growth of data, present unique opportunities and challenges for companies wanting to take advantage of their mission-critical resources. Built on the proven, trusted availability, security, and scalability of DB2 11 for z/OS and the z Systems platform, the gold standard in the industry, DB2 12 gives you the capabilities needed to securely meet the business demands of mobile workloads and increased mission-critical data. It delivers world-class analytics and OLTP performance in real-time.

DB2 for z/OS delivers innovations in these key areas:

Scalable, low-cost, enterprise OLTP and analytics

DB2 12 continues to improve upon the value offered with DB2 11, with further CPU savings and performance improvements through greater memory optimization. Compared to DB2 11, DB2 12 clients can achieve up to 10% CPU savings for various traditional OLTP and query workloads; heavy concurrent INSERT workloads may see higher benefits, with up to 30% CPU savings, and select query workloads utilizing UNION ALL, large sorts, and selective user-defined functions (UDFs) can benefit even more.

DB2 12 provides further cost reduction through increased zIIP eligibility of the DB2 REORG and LOAD utilities.

DB2 12 provides deep integration with the IBM z13, offering the following benefits:

  • More efficient use of compression
  • Support for compression of LOB data (also available with the IBM zEnterprise EC12)
  • Faster XML parsing through the use of SIMD technology

Enhancements to compression aid DB2 utility processing by reducing elapsed time and CPU consumption, with the potential to improve data and application availability. Hardware exploitation to support compression of LOB data can significantly reduce storage requirements and improve the overall efficiency of LOB processing.

DB2 12 includes the new SQL TRANSFER OWNERSHIP statement, enabling better security and control of objects that contain sensitive data. In addition, DB2 12 enables system administrators to migrate and install DB2 systems while preventing access to user data.
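The new statement itself is brief; a sketch with hypothetical object and user names:

```sql
-- New in DB2 12: reassign ownership of an object holding sensitive data.
-- The REVOKE PRIVILEGES clause removes the original owner's implicit
-- privileges as part of the transfer.
TRANSFER OWNERSHIP OF TABLE PROD.PATIENT_CLAIMS
  TO USER SECADM1
  REVOKE PRIVILEGES;
```

This lets security administrators separate object ownership from the DBAs who created the objects, tightening control over who can read sensitive data.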

The real-world proven, system-wide resiliency, availability, scalability, and security capabilities of DB2 and z Systems continue to be the industry standard, keeping your business running when other solutions may not. This is especially important as enterprises support dynamic mobile workloads and the explosion of data in their enterprises. DB2 12 continues to excel and extend the unique value of z Systems, while empowering the next wave of applications.

Easy access, easy scale, and easy application development for the mobile enterprise:

In-memory performance improvements

As enterprises manage the emergence of the next generation of mobile applications and the proliferation of the IoT, database management system (DBMS) performance can become a critical success factor. To that end, DB2 12 contains many features that exploit in-memory techniques to deliver world-class performance, including:

  • In-memory fast index traverse
  • Contiguous and larger buffer pools
  • Use of in-memory pipes for improved insert performance
  • Increased sort and hash in-memory to improve sort and join performance
  • Caching the result of UDFs
  • In-memory optimization in Declare Global Temporary Table (DGTT) to improve declare performance
  • In-memory optimization in the Resource Limit Facility to improve RLF checking

DB2 12 offers features to facilitate the successful deployment of new analytics and mobile workloads. Workloads connecting through the cloud or from a mobile device may not have the same performance characteristics as enterprise workloads. To that end, DB2 12 has many features to help ensure that new application deployments are successful: workloads that are sort-intensive or that use outer joins, UNION ALL, and CASE expressions can experience improved performance and increased CPU parallelism offloaded to zIIP.

Easy access to your enterprise systems of record

DB2 12 is used to connect RESTful web, mobile, and cloud applications to DB2 for z/OS, providing an environment for service, management, discovery, and invocation. This feature works with IBM z/OS Connect Enterprise Edition (z/OS Connect EE,5655-CEE) and other RESTful providers to provide a RESTful solution for REST API definition and deployment.

The IBM Data Studio product, which can be used as the front-end tooling to create, deploy, or remove DB2 for z/OS services, is supported. Alternatively, new RESTful management services and BIND support are provided to manage services created in DB2 for z/OS. This capability was first made available in the DB2 Adapter for z/OS Connect feature of the DB2 Accessories Suite for z/OS, V3.3 (5697-Q04) product, working with both DB2 10 for z/OS and DB2 11 for z/OS.

Overview of DB2 features

DB2 12 for z/OS consists of the base DB2 product with a set of optional separately orderable features. Select features QMF Enterprise Edition V12 and QMF Classic Edition V12 are also made available as part of DB2 11 for z/OS (5615-DB2). Some of these features are available at no additional charge and others are chargeable:

Chargeable features for QMF V12 (features of DB2 12 for z/OS and DB2 11 for z/OS)

QMF Enterprise Edition provides a complete business analytics solution for enterprise-wide business information across end-user and database platforms. QMF Enterprise Edition consists of the following capabilities:

  • QMF for TSO and CICS
  • QMF Enhanced Editor (new)
  • QMF Analytics for TSO
  • QMF High Performance Option (HPO)
  • QMF for Workstation
  • QMF for WebSphere
  • QMF Data Service, including QMF Data Service Studio (new)
  • QMF Vision (new)

New enhancements for each capability are as follows:

QMF for TSO and CICS has significant improvements for the QMF for TSO/CICS client.

The QMF process of saving database tables, traditionally accomplished through the QMF SAVE DATA command, has been enhanced. QMF SAVE DATA intermediate results can now be saved to IBM DB2 Analytics Accelerator for z/OS 'Accelerator-only tables'. The ability to save intermediate results in Accelerator-only tables is also available for the command IMPORT TABLE and the new QMF RUN QUERY command with the TABLE keyword. This exploitation of the Accelerator may result in benefits such as improved performance, reduced batch window allocation for QMF applications, and reduced storage requirements.
By using the new TABLE keyword on the RUN QUERY command, you can now save data, using the SAVE DATA command, without needing to return and complete a data object. The RUN QUERY command with the TABLE keyword operates completely within the database to both retrieve data and insert rows without returning a report to the user.
Usability of the TSO client is improved by the enhanced editor feature (see the QMF Enhanced Editor section for more detail).
Both the TSO and CICS clients now have the ability to organize queries, procedures, forms, and analytics into groups called folders, aiding in productivity and usability. QMF commands such as LIST, SAVE, ERASE, and RENAME have been updated to work with folders.
QMF TSO and CICS clients now have additional report preview options. After proper setting of the DSQDC_DISPLAY_RPT global variable, users will be able to enter a report mini-session, where queries can be run to view potential output without actually committing the results. The report mini-session can be useful for running and testing SELECT with change type queries. Upon exiting the report mini-session, the user will be prompted to COMMIT or ROLLBACK the query.
With Version 12, QMF's TSO and CICS clients deliver significant performance and storage improvements.
Using the new QMF program parameter option DSQSMTHD, users can make use of a second database thread. The second thread is used for RUN QUERY and DISPLAY TABLE command processing. Use of a second database thread can assist with performance issues on SAVE operations with an incomplete report outstanding. Additionally, use of the second thread can reduce storage requirements for SAVE DATA commands on large report objects, because rows need not reside in storage but can be retrieved from the database and inserted into the new table as needed.
Using the DSQEC_BUFFER_SIZE global variable, the QMF internal storage area used to fetch database row data can be increased. By changing the default from 4 kilobytes to a value up to 256 kilobytes, QMF can increase the amount of data fetched in a single call to the database. Fewer calls to the database reduce the time it takes to complete the report, which can result in significant performance improvements.
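Sketches of the two tunables just described (the values are illustrative, and exact invocation details vary by installation, so treat these as assumptions to check against the QMF documentation):

```sql
-- QMF command: raise the fetch buffer from the 4 KB default to 256 KB
SET GLOBAL (DSQEC_BUFFER_SIZE=256

-- Start the QMF TSO client with a second database thread enabled for
-- RUN QUERY and DISPLAY TABLE processing (program parameter DSQSMTHD)
ISPSTART PGM(DSQQMFE) NEWAPPL(DSQE) PARM(DSQSMTHD=YES)
```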
QMF's TSO and CICS clients now integrate with QMF Data Service, enabling users of this interface to access a broader range of data sources. The support enables access to z/OS and non-z/OS data sources, including relational and nonrelational data sources (see the QMF Data Service section for a description of accessible data types). This capability is available only through QMF Enterprise Edition.
QMF Enhanced Editor (new) provides usability improvements to the TSO client by bringing customizable highlighting and formatting for SQL syntax, reserved words, functions, and data types, and parenthesis checking. The new query assist feature provides table name suggestions, column name and data type information, and suggested column value information, plus a preview pane.

QMF Analytics for TSO has been enhanced as follows:

  • Three new statistics models have been added: the Wilcoxon Signed-Rank Test, the Mann-Whitney U Test, and the F-Test model.
  • A user-defined mapping capability has been added. OpenGIS WKT map definitions are available in either DB2 tables or exported data sets, which can be read to format user-specific maps.
  • Maps for Africa, North America, South America, and Germany have been added to the existing library of predefined maps.
  • The ability to choose columns for use in analytical analyses has been improved with enhanced data type targeting and information.
  • Mouse (graphics cursor) support is added for quicker interaction with the QMF Analytics for TSO functionality.
  • Saving analytics has been updated to display a list of existing analytics objects.
QMF for Workstation and QMF for WebSphere add additional support for DB2 Analytics Accelerator and enable QMF objects to be used as virtual tables in QMF Data Service.

Administrators now have the ability to specify whether the DB2 Analytics Accelerator should be used by QMF users when available (by database and query) through new resource limit options on the data source or object.
QMF Workstation and QMF for WebSphere can now write data to the DB2 Analytics Accelerator. Data can be saved as Accelerator-only tables or Accelerator-shadow tables. Queries could then be created against this data, enabling them to take advantage of the DB2 Analytics Accelerator.
QMF will detect DB2 Analytics Accelerator appliances and display these appliances under the data source. Users can also see tables that exist on the DB2 Analytics Accelerator and even add additional tables to the DB2 Analytics Accelerator by dragging and dropping tables into the appliance folder.
QMF-prepped data will be made accessible as virtual tables or stored procedures to external applications through data service connectors such as:
  • Mainframe Data Service for Apache Spark on z/OS
  • Rocket DV
  • Rocket Mainframe Data Service on IBM Bluemix
  • IBM DB2 Analytics Accelerator Loader
QMF Data Service enables DB2 QMF to access numerous data sources and largely eliminates the need to move data in order to perform your analytics. It enables you to obtain real-time analytics insights using a high-performance in-memory mainframe solution.

The need for real-time information requires a high-performance data architecture that can handle the extreme volumes and unique requirements of mainframe data and that is transparent to the business user. DB2 QMF's new data service includes several query optimization features, such as parallel I/O and MapReduce. Multiple parallel threads handle input requests, continually streaming and buffering data to the client. The mainframe MapReduce technology greatly reduces the elapsed time of the query by accessing the database with multiple threads that read the file in parallel.

Data definitions and schema information are extracted from a variety of places to create virtual tables. All of the implementation details are hidden from the user, presented instead as a single logical data source. The logical data source is easily administered through the new Eclipse-based QMF Data Service Studio. With QMF Data Service Studio, DB2 QMF now supports a broader range of data sources, including:

  • Mainframe: Relational/nonrelational databases and file structures: ADABAS, DB2, VSAM, and Physical Sequential; CICS and IMS
  • Distributed: Databases running on Linux, UNIX, and Microsoft Windows platforms: DB2, Oracle, Informix, Derby, and SQL Server
  • Cloud and big data: Cloud-based relational and nonrelational data, and support for Hadoop
Data prepared in QMF will be made accessible as virtual tables to external applications through Data Service connectors such as:

  • Mainframe Data Service for Apache Spark on z/OS
  • Rocket DV
  • Rocket Mainframe Data Service on IBM Bluemix
  • DB2 Analytics Accelerator Loader
QMF Vision (new) is a web client visualization interface that enables you to create, modify, and drill down on data visualizations that are displayed on a dashboard. Users have the ability to drag and drop whatever dimensions or measures are needed, or add more variables for increased drill-down capability. Column, pie, treemap, geo map, line, and scatter charts, among many more chart objects, are available. This gives a business user the ability to analyze data and provide insights that might not be readily apparent.

The most commonly requested guided analytics capabilities, such as outlier detection and cardinality, are now provided out of the box. These capabilities are integrated into the architecture for an intuitive analysis experience. For one-off decision making, you can quickly create simple reports using the tabular chart, which gives you a line-by-line view of summary data. Reports can be formatted to produce multilevel grouping, hierarchical structures, and dynamic cross tabulations, all for greater readability.

This enhancement simplifies the sharing of insights and collaboration with other users. Dashboards can be dropped into the chat window and other users can immediately start collaborating. They can discuss performance results, strategy, and opportunities and discover new insights about the data. Users can connect to new data sources as well as work with existing QMF queries and tables.

QMF Classic Edition supports users working entirely on traditional mainframe terminals and emulators, including IBM Host On Demand, to access DB2 databases. QMF Classic Edition consists of the following capabilities in V12:

  • QMF for TSO and CICS
  • QMF Enhanced Editor
  • QMF Analytics for TSO
  • QMF High Performance Option (HPO)

Get the most out of DB2 for z/OS with modern application development language support

More Information: