• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to the OS and BI solutions. And of course also for the great Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    For consulting services related to Windows Server 2012 onwards, Windows 7 and higher on the client side, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), and responsive and adaptive websites.

25 October 2016

Hyper-Converged OpenStack on Windows Nano Server 2016

Cloudbase Solutions Announces the Industry’s First Platform for Hyper-Converged OpenStack on Windows Nano Server 2016

The Hyper-Converged OpenStack on Windows Server cloud infrastructure distributes data across individual cloud servers while eliminating the need for expensive dedicated storage hardware. In this configuration every node has compute, storage and networking roles, increasing scalability and fault tolerance while dramatically reducing overall costs.
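A back-of-the-envelope sketch of the economics: with replicated storage spread across converged nodes, both usable capacity and fault tolerance fall out of the replication factor (all figures below are illustrative assumptions, not numbers from the announcement):

```python
# Rough sizing for a hyper-converged cluster where every node contributes
# compute, storage and networking. Node count, disks per node, disk size
# and replication factor are illustrative assumptions.
def usable_capacity_tb(nodes, disks_per_node, disk_tb, replicas):
    """Usable capacity is raw capacity divided by the replication factor."""
    raw = nodes * disks_per_node * disk_tb
    return raw / replicas

def tolerated_node_failures(replicas):
    """With at most one replica per node, the cluster survives replicas - 1 node losses."""
    return replicas - 1

nodes = 8
print(usable_capacity_tb(nodes, disks_per_node=4, disk_tb=2, replicas=3))  # ~21.3 TB usable out of 64 TB raw
print(tolerated_node_failures(3))  # 2
```

The same commodity nodes that would otherwise sit behind a dedicated SAN here provide the redundancy themselves, which is where the cost reduction comes from.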

Hyper-Converged OpenStack on Windows Nano Server 2016

Cloudbase Solutions’ design for the Hyper-Converged data center relies on components that are fully distributed, and is entirely based on commodity hardware, having a remarkably low cost of ownership for the enterprise with the benefit of all the IaaS features offered by OpenStack, for both on-premise as well as public clouds.

Windows in OpenStack

The core components for this solution are OpenStack, Microsoft’s Windows Nano Server 2016, Hyper-V, Storage Spaces Direct (S2D) and Open vSwitch for Hyper-V, deployed starting from the bare metal up with Cloudbase Solutions’ Juju charms for Windows Server.

Cloudbase Solutions offers the platform as managed or unmanaged, with support for OpenStack and Windows Nano Server 2016, along with orchestration solutions based on OpenStack Heat templates or Juju for all Microsoft based workloads, from Active Directory to SharePoint, Exchange and more!

“The Hyper-Converged infrastructure adds simplicity, increased fault tolerance and scalability to your architecture, which is exactly what modern enterprises are looking for in order to compete efficiently. It’s important for OpenStack customers to know they have choices when it comes to their infrastructure, and we see the Hyper-Converged solution as a key to helping them in achieving that architectural freedom” - said Alessandro Pilotti, Cloudbase Solutions CEO

Manage Nano Server and Windows Server 2016 Hyper-V

About Cloudbase Solutions
Cloudbase Solutions™ is dedicated to cloud computing and interoperability. Our mission is to bridge the modern enterprise and cloud computing worlds by bringing OpenStack to Windows-based infrastructures. This effort starts with developing and maintaining all the crucial Windows and Hyper-V OpenStack components and culminates with a product range which includes orchestration for Hyper-V, SQL Server, Active Directory, Exchange and SharePoint Server via Juju charms and Heat templates.

Furthermore, to solve the complexity of cloud migration, Cloudbase Solutions developed Coriolis, a cloud migration-as-a-service product for migrating existing Windows and Linux workloads between clouds. Cloud migration is a necessity for a large number of use cases, especially for users moving from traditional virtualization technologies like VMware vSphere or Microsoft System Center VMM to Azure / Azure Stack, OpenStack, Amazon AWS or Google Cloud.


Building Your First Ceph Cluster for OpenStack— Fighting for Performance, Solving Tradeoffs

Ceph is a full-featured, yet evolving, software-defined storage (SDS) solution. It’s very popular because of its robust design and scaling capabilities, and it has a thriving open source community. Ceph provides all data access methods (file, object, block) and appeals to IT administrators with its unified storage approach.

In the true spirit of SDS solutions, Ceph can work with commodity hardware or, to put it differently, is not dependent on any vendor-specific hardware. A Ceph storage cluster is intelligent enough to utilize the storage and compute power of any given hardware, and provides access to virtualized storage resources through Ceph clients or other standard protocols and interfaces.

Ceph storage clusters are based on Reliable Automatic Distributed Object Store (RADOS), which uses the CRUSH algorithm to stripe, distribute and replicate data. The CRUSH algorithm originated from a PhD thesis by Sage Weil at the University of California, Santa Cruz. Here’s an overview of Ceph’s different ways for accessing stored data:

The power of Ceph can transform your organization’s IT infrastructure and your ability to manage vast amounts of data. If your organization runs applications with different storage interface needs, Ceph is for you! Ceph’s foundation is the Reliable Autonomic Distributed Object Store (RADOS), which provides your applications with object, block, and file system storage in a single unified storage cluster—making Ceph flexible, highly reliable and easy for you to manage.
Ceph’s RADOS provides you with extraordinary data storage scalability—thousands of client hosts or KVMs accessing petabytes to exabytes of data. Each one of your applications can use the object, block or file system interfaces to the same RADOS cluster simultaneously, which means your Ceph storage system serves as a flexible foundation for all of your data storage needs. You can use Ceph for free, and deploy it on economical commodity hardware. Ceph is a better way to store data.
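The key idea behind CRUSH is that every client computes object locations deterministically from the object name and the cluster map, so no central lookup table is needed. The following toy sketch illustrates that idea with rendezvous hashing; it is NOT the real CRUSH algorithm, just a minimal illustration of deterministic, lookup-free placement:

```python
import hashlib

# Toy illustration of CRUSH-style placement: rank OSDs by a hash of
# (object name, OSD id) and take the top `replicas`. Any client running
# the same code over the same OSD list computes the same placement.
def place(object_name, osds, replicas=3):
    def score(osd):
        h = hashlib.sha256(f"{object_name}:{osd}".encode()).hexdigest()
        return int(h, 16)
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(6)]
primary, *secondaries = place("rbd_data.1234", osds)
# No directory server was consulted: the placement is a pure function
# of the object name and the cluster membership.
print(primary, secondaries)
```

The real CRUSH additionally respects failure domains (host, rack, row) and per-device weights, so replicas never land on the same physical host.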

OpenStack Australia Day 2016 - Andrew Hatfield, Red Hat: The Future of Cloud Software Defined Storage

The Ceph Storage Cluster
A Ceph storage cluster is a heterogeneous group of compute and storage resources (bare-metal servers, virtual machines and even Docker instances) often called Ceph nodes, where each member of the cluster works as either a monitor (MON) or an object storage device (OSD). Ceph clients use the storage cluster to store their data directly as RADOS objects or through virtualized resources such as RBDs (RADOS Block Devices) and other interfaces.

Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar

Windows and OpenStack - What's New in Windows Server 2016

Windows and OpenStack: What’s new in Windows Server 2016? - Alessandro Pilotti from ITCamp on Vimeo.

OpenStack is getting big in the enterprise, which is traditionally very Microsoft-centric. This session will show you everything you need to know about Windows in OpenStack! To begin with, we will show how to provision Windows images for OpenStack, including Windows Server 2012 R2, Windows 7, 8.1 and the brand new Windows Server 2016 Nano Server for KVM, Hyper-V and ESXi Nova hosts.

Next, we will show how to deploy Windows workloads with Active Directory, SQL Server, SharePoint, Exchange using Heat templates, Juju, Puppet and more.

Last but not least, we'll talk about Active Directory integration in Keystone, Hyper-V deployment and Windows bare metal support in Ironic and MaaS. The session will give you a comprehensive view on how well OpenStack and Windows can be integrated, along with a great interoperability story with Linux workloads.

Exploring Nano Server for Windows Server 2016 with Jeffrey Snover

22 September 2016

IBM Power Systems for Big Data and Analytics

IBM Linux Servers Designed to Accelerate Artificial Intelligence, Deep Learning and Advanced Analytics

New IBM POWER8 Chip with NVIDIA NVLink(TM) Enables Data Movement 5x Faster than Any Competing Platform
Systems Deliver Average of 80% More Performance Per Dollar than Latest x86-Based Servers(1)
Expanded Linux Server Lineup Leverages OpenPOWER Innovations

A quick introduction to the IBM Power System S822LC from the IBM Client Center Montpellier

A major achievement stemming from open collaboration is the new IBM Power System S822LC for High Performance Computing server.

IBM Linux on Power Big Data Solutions

IBM Data Engine for Hadoop and Spark – Power Systems Edition

With more and more intelligent and interconnected devices and systems, the data companies are collecting is growing at unprecedented rates. As much as 90% of that data is unstructured, coming from social media, electronic documents, machine data, connected devices, etc., and growing at rates as high as 50% per year. This is big data.

Extracting insights from big data can make your business more agile, more competitive and provide insights that, in the past, were beyond reach. The emergence of recent technologies such as the real-time analytics processing capabilities of stream computing, high speed in-memory analytics using Apache Spark and the massive MapReduce scale-out capabilities of Hadoop® has opened the door to a world of possibilities. This has also created the need for robust infrastructures that combine computing power, memory and data bandwidth to process and move large quantities of data -- fast.

Understanding the IBM Power Systems Advantage

To meet this need, IBM used the Power System S812LC to design a solution for creating a big data environment built on a heritage of strong resiliency, availability and security - the IBM Data Engine for Hadoop and Spark - Power Systems Edition.

With a data-centric design, this Linux-based solution offers a tightly-integrated and performance-optimized infrastructure for in-memory Spark and MapReduce-based Hadoop big data workloads. The IBM Data Engine for Hadoop and Spark can be tailored specifically to meet your Big Data workloads by using a simple building block approach to match the mix of memory, networking and storage to application requirements. This approach gives you the best possible infrastructure for your big data workload.

POWER8 Scale-Out: Massive Bandwidth

With a vision for enhanced bandwidth, IBM POWER8 has achieved vast improvements in latency, two-and-a-half times better memory performance, and a lot more.
POWER8 offers more than 32 channels of DDR memory funneling into the POWER8 processor. This is two times the 16-channel capacity of POWER7, and four times the eight-channel capacity of most competitors.

Move Up to Power8 with Scale Out Servers

The result of a depth and breadth of innovation focused on optimizing for data centers while increasing efficiency and lowering infrastructure cost, POWER8's bandwidth contributes to a better system that does more, making technology leadership attainable for customers.

Each POWER8 socket supports up to 1 TB of DRAM in the initial server configurations, yielding 2 TB capacity scale-out systems and 16 TB capacity enterprise systems, and supports up to 230 GB per second of sustained memory bandwidth per socket.
POWER8 is the first processor designed for big data, with massive parallelism and bandwidth for real-time results; coupled with IBM DB2 with BLU Acceleration and Cognos analytics software, it far outpaces industry-standard options, delivering insights up to 82x faster.

Far more than a function of size, sophisticated innovations in the POWER8 memory organization are designed to enhance both reliability and performance. Key among the innovations:

  • Up to eight high-speed channels, each running at up to 9.6 GHz, for up to 230 GB/s of sustained performance
  • Up to 32 total DDR ports, yielding 410 GB/s peak at the DRAM
  • Up to 1 TB of memory capacity per fully configured processor socket
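As a quick sanity check on the figures quoted above, 32 DDR ports at DDR3-1600 speeds line up with the 410 GB/s peak at the DRAM (the DIMM speed is my assumption, not stated in the text):

```python
# Rough arithmetic behind the quoted peak memory bandwidth.
# Assumption: DDR3-1600 DIMMs, i.e. 1600 MT/s on a 64-bit (8-byte) port.
ports = 32                  # total DDR ports per socket
bytes_per_transfer = 8      # 64-bit data path per port
transfers_per_sec = 1600e6  # DDR3-1600
peak_gb_s = ports * bytes_per_transfer * transfers_per_sec / 1e9
print(peak_gb_s)  # 409.6, i.e. the ~410 GB/s peak quoted above
```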

Big Data’s Big Memory requirements call for nothing less than the industry’s most innovative, scalable, and massive bandwidth and capacity. POWER8 thrives on the kinds of complexities that your organization faces in the current environment, with a platform to keep you ahead of the game as unforeseen challenges and opportunities emerge.

Features and benefits

A comprehensive, fully integrated cluster that is designed for ease of procurement, deployment, and operation. It includes all required components for Big Data applications, including servers, network, storage, operating system, management software, Hadoop and Spark software, and runtime libraries.

An application-optimized configuration. The configuration of the cluster is carefully designed to optimize application performance and reduce total cost of ownership. The cluster is integrated with IBM Platform™ Cluster Manager, IBM Open Platform with Apache Hadoop and Spark, and optionally IBM Spectrum Scale and IBM Spectrum Symphony, which include advanced capabilities for storage and resource optimization. This optimized configuration enables users to show results more quickly.
Power S812LC delivers 2.3x better performance per dollar spent for Spark workloads(1)

Advanced technology for performance and robustness. The hardware and software components in this infrastructure are customizable to allow the best performance or the best price/performance ratio.

Big data clusters can start out small and grow as the demands from line of business increase. Choosing an infrastructure that can scale to handle these demands is vital to meeting service level agreements and continuing access to insights. Organizations must also consider the maintenance required. Smart businesses choose Power Systems because they know Power Systems is built for big data workloads that demand high performance and high reliability.

Analytics solutions
Unlock the value of data with an IT infrastructure that provides speed and availability to deliver accelerated insights to the people and processes that need them.

IBM Data Engine for Analytics - Power Systems Edition
A customized infrastructure solution with integrated software optimized for both big data and analytics workloads.

Co-Design Architecture for Exascale

IBM POWER8 as an HPC platform

The State of Linux Containers

IBM Data Engine for NoSQL – Power Systems Edition
Unique technology from IBM delivers dramatic reductions in the cost of large NoSQL databases.

SAP HANA benefits from the enterprise capabilities of Power Systems
SAP HANA runs on all POWER8 servers. Power Systems Solution Editions for SAP HANA BW are easy to order and tailored for quick deployment and rapid-time-to value, while offering flexibility to meet individual client demands.

DB2 with BLU Acceleration on Power Systems
Enable faster insights using analytics queries and reports from data stored in any data warehouse, with a dynamic in-memory columnar solution.

IBM Solution for Analytics – Power Systems Edition
This flexible integrated solution for faster insights includes options for business intelligence and predictive analytics with in-memory data warehouse acceleration.

IBM Data Engine for Hadoop and Spark – Power Systems Edition
A fully integrated Hadoop and Spark solution optimized to simplify and accelerate unstructured big data analytics.

OpenPOWER Update

IBM PureData System for Operational Analytics
Easily deploy, optimize and manage data intensive workloads for operational analytics with an expert integrated system.

IBM DB2 Web Query for i
Help ensure every decision maker across the organization can easily find, analyze and share the information needed to make better, faster decisions.

OpenPOWER Roadmap Toward CORAL

The Quantum Effect: HPC without FLOPS

21 August 2016

Why Cortana Analytics Suite

Cortana Analytics Suite (CAS), what can it do for you

Microsoft introduced the Cortana Analytics Suite (CAS) in July 2015, at the Worldwide Partner Conference in Orlando. Want to learn more? Then read on.

Cortana Analytics Suite

When Microsoft first announced CAS, it touted the suite as an integrated set of cloud-based services that vaguely promised to be “a huge differentiator for any business.” The suite would be available through a simple monthly subscription and be customizable to fit the needs of different organizations. The company planned to make CAS available that coming fall.

Two months later, Microsoft hosted the first-ever Cortana Analytics Workshop, a gathering of techies that would provide participants with a chance to learn about Microsoft’s advanced analytics vision. The workshop appeared to represent the suite’s official launch.

Microsoft Envision | Impactful analytics using the Cortana Intelligence Suite with EY

At some point during the build-up, Microsoft also set up a slick new website dedicated to the CAS vision ( https://www.microsoft.com/en-us/server-cloud/cortana-analytics-suite/). The website featured rolling graphics with stylized icons, and large bold headlines that emphasized the suite’s imminent importance. Cortana Analytics, it would seem, had officially arrived.

As we can see from the above architecture diagram, following are the key pillars of Cortana Intelligence Suite:

Information Management: Consists of services that enable us to capture incoming data from various sources, including streaming data from sensors, devices, and other IoT systems; manage the various data sources that are part of the enterprise's data analytics ecosystem; and orchestrate and build end-to-end flows to perform data processing and data preparation operations.
Big Data Stores: Consists of services that enable us to store and manage large-scale data, in other words, big data. These services offer a high degree of elasticity, high processing power, and high throughput with great performance.
Machine Learning and Analytics: Consists of services that enable us to perform advanced analytics, build predictive models, and apply machine learning algorithms to large-scale data. Allows us to analyze large-scale data of different varieties using programming languages such as R and Python.
Dashboards and Visualizations: Consists of services that enable us to build reports and dashboards to view the insights. It primarily consists of Power BI, which allows us to build highly interactive, visually appealing reports and dashboards. Apart from this, other tools such as SQL Server Reporting Services (SSRS), Excel, etc. can also connect to data from some of these services in the Cortana Intelligence Suite.
Intelligence: Consists of advanced intelligence services that enable us to build smart interactive services using advanced text, speech, and other recognition systems.

  • “Take action ahead of your competitors by going beyond looking in the rear-view mirror to predicting what’s next.”
  • “Get closer to your customers. Infer their needs through their interaction with natural user interfaces.”
  • “Get things done with Cortana in more helpful, proactive, and natural ways.”

Modern Data Warehousing with the Microsoft Analytics Platform System

Cortana Intelligence Suite Highlights

Here are the highlights of Cortana Intelligence Suite:

  • A fully managed Big Data and Advanced Analytics Suite enabling businesses to transform data into intelligent actions.
  • An excellent offering perfectly suited for handling modern day data sources, data formats, and data volumes to gain valuable insights.
  • Offers various preconfigured solutions like Forecasting, Churn, Recommendations, etc.
  • Apart from the big data and analytical services, Cortana Intelligence Suite also includes some of the advanced intelligence services - Cortana, Bot Framework, and Cognitive Services.
  • Contains services to capture the data from a variety of data sources, process and integrate the data, perform advanced analytics, visualize and collaborate, and gain intelligence out of it.
  • Offers all the benefits of Cloud Computing like scale, elasticity, and pay-as-you-go model, etc.

Microsoft Envision | Running a data driven company

Use Cases for the Cortana Intelligence Suite

Cortana Intelligence Suite can address data challenges across industries, enabling organizations to transform their data into intelligent actions and to be more proactive in the day-to-day operational aspects of the business. Here are a few of the many industries where Cortana Intelligence Suite can be used.

Financial Services: Monitor transactions as they happen in near real time. Based on analysis of historical data and its anomalies and trends, Cortana Intelligence Suite can apply complex machine learning algorithms and predictive models to predict potentially fraudulent transactions and help the business prevent such transactions in future, thereby protecting customers' money. The financial services sector is vast, and Cortana Intelligence Suite can be used in various scenarios including credit/debit card fraud, electronic transfer fraud, phishing attempts to steal confidential customer data, etc.
Retail: Cortana Intelligence Suite can be used across the retail industry in various scenarios, including optimizing availability by forecasting demand, enabling businesses to ensure the right products are in the right location at the right time. There are numerous use cases in retail, and Cortana Intelligence Suite can be used in conjunction with IoT systems. For instance, with the help of sensors (beacon technology) we can detect when a customer enters a retail store and, based on the data we hold about that customer, offer targeted discounts based on the customer's demographics, past purchase history, what the customer has been browsing online (this is where bringing in data from outside the enterprise comes into the picture), and other relevant information that helps us understand the customer's preferences.
Healthcare: There are various scenarios in healthcare where Cortana Intelligence Suite can be used. Historical data on the utilization of resources (rooms, beds, other equipment, etc.) and personnel (doctors, nurses, general staff, etc.) can be analyzed to predict future demand, enabling hospitals to mobilize and optimize resources and staffing accordingly. Historical patient data can be analyzed in conjunction with weather data to identify patterns and the illnesses particular seasons might bring, helping the authorities take preventive measures.
Manufacturing: By constantly monitoring equipment and collecting data over time, the probability of issues occurring can be predicted, and a maintenance schedule can be defined accordingly to prevent potential issues which, if they occur, can hamper production and day-to-day operations, leading to unhappy customers, loss of business, and increased operational costs. Cortana Intelligence Suite fits this scenario very well, enabling end-to-end data collection, monitoring, alerting, and proactive actions and decisions.
Public Sector: There are various areas in the public sector where Cortana Intelligence Suite can be used to improve overall operational efficiency, including public transport, power grids, water supplies, and more. By monitoring resource usage across areas, we can identify usage patterns, predict and forecast demand, and ensure supply accordingly so that there is neither a shortage nor a waste of resources, improving overall operational efficiency and keeping customers happy.
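The near-real-time monitoring pattern behind the fraud scenario above can be sketched in a few lines. This toy detector flags transactions that dwarf a sliding window of recent amounts; the window size and threshold are illustrative assumptions, not anything from the suite, where a trained model would do the scoring:

```python
from collections import deque

# Minimal sliding-window anomaly detector: flag a transaction when its
# amount exceeds `factor` times the mean of the last `window` amounts.
def flag_anomalies(amounts, window=5, factor=3.0):
    recent = deque(maxlen=window)
    flagged = []
    for i, amount in enumerate(amounts):
        if len(recent) == recent.maxlen:  # only score once the window is full
            mean = sum(recent) / len(recent)
            if amount > factor * mean:
                flagged.append(i)
        recent.append(amount)
    return flagged

txns = [20, 25, 22, 30, 24, 26, 900, 23]
print(flag_anomalies(txns))  # [6] -- the 900 transaction stands out
```

In a production pipeline the same shape appears with an event-ingestion service in front and a real model behind, but the core loop is still "score each event against recent history as it arrives".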

Microsoft Envision | ZAP presents: It’s all about the data --big, small, or diverse

The above offer just a glimpse of the scenarios in each of these sectors; there are many more. Apart from these, the Cortana Intelligence Suite can be used in countless other sectors, such as Education, Insurance, Marketing, Hospitality, Aviation, Research, and so on.

The Azure side of Cortana Analytics Suite

When it comes to the individual Azure services, we can often find more concrete information than we can with Cortana Analytics. That’s not to say we won’t run into the same type of marketing clutter, but we can usually find details that are a bit more specific (even if it means going outside of Microsoft). What we don’t find are many references to Cortana Analytics, although that doesn’t prevent us from building the types of solutions that the CAS marketing material likes to show off.

The first of the CAS-related services have to do with storing and processing large sets of data:

Azure SQL Data Warehouse: A database service that can distribute workloads across multiple compute nodes in order to process large volumes of relational and non-relational data. The service uses Microsoft's massively parallel processing (MPP) architecture, along with advanced query optimizers, making it possible to scale out and parallelize complex SQL queries.

Azure Data Lake Store: A scalable storage repository for data of any size, type, or ingestion speed, regardless of where it originates. The repository uses a Hadoop file system to support compatibility with the Hadoop Distributed File System (HDFS) and offers unlimited storage without restricting file sizes or data volumes.

Azure Data Lake Store is actually part of a larger unit that Microsoft refers to as Azure Data Lake. Not only does it include Data Lake Store, but also Data Lake Analytics and HDInsight, both of which share the CAS label. You can find additional information about the Data Lake services in the Simple-Talk article Azure Data Lake.

The next category of services that fall under the CAS umbrella focus on data management:

Azure Data Factory: A data integration service that uses data flow pipelines to manage and automate the movement and transformation of data. Data Factory orchestrates other services, making it possible to ingest data from on-premises and cloud-based sources, and then transform, analyze, and publish the data. Users can monitor the pipelines from a single unified view.

Azure Data Catalog: A system for registering enterprise data sources, understanding the data in those sources, and consuming the data. The data remains in its location, but the metadata is copied to the catalog, where it is indexed for easy discovery. In addition, data professionals can contribute their knowledge in order to enrich the source metadata.

Azure Event Hubs: An event processing service that can ingest millions of events per second and make them available for storage and analysis. The service can log events in near real time and accept data from a wide range of sources. Event Hubs uses technologies that support low latency and high availability, while providing flexible throttling, authentication, and scalability.

Microsoft Envision | Advantage YOU: Be more, do more, with Infosys and Microsoft on your side

For more information about Event Hubs, refer to the Simple-Talk article Azure Event Hubs. In the meantime, here’s a quick overview of the analytic components included in the CAS package:

Azure Machine Learning: A service for building, deploying, and sharing predictive analytic solutions. The service runs predictive models that learn from existing data, making it possible to forecast future behavior and trends. Machine Learning also provides the tools necessary for testing and managing the models, as well as deploying them as web services.

Azure Data Lake Analytics: A distributed service for analyzing data of any size, including what is in Data Lake Store. Data Lake Analytics is built on Apache YARN, an application management framework for processing data in Hadoop clusters. Data Lake Analytics also supports U-SQL, a new language that Microsoft developed for writing scalable, distributed queries that analyze data.

Azure HDInsight: A fully managed Hadoop cluster service that supports a wide range of analytic engines, including Spark, Storm, and HBase. Microsoft has updated the service to take advantage of Data Lake Store and to maximize security, scalability, and throughput.

Azure Stream Analytics: A service that supports complex event processing over streaming data. Stream Analytics can handle millions of events per second from a variety of sources, while correlating them across multiple streams. It can also ingest events in real time, whether from one data stream or many.

I've already mentioned how Data Lake Analytics and HDInsight are part of Azure Data Lake, and I've pointed you to a related article. If you want to learn more about Stream Analytics, check out the Simple-Talk article Microsoft Azure Stream Analytics.

Azure Stream Analytics

Cortana Analytics Gallery

Another interesting component of the CAS package is the Cortana Analytics Gallery, formerly the Azure Machine Learning Gallery. The gallery provides an online environment for data scientists and developers to share their solutions, particularly those related to machine learning. Microsoft also publishes its own solutions to the site for participants to consume.

The Cortana Analytics Gallery is divided into the following six sections.

Solution Templates: Templates based on industry-specific partner solutions. Currently, the category includes only the Vehicle Telemetry Analytics solution, published by Microsoft this past December. The solution demonstrates how those in the automobile industry can gain real-time and predictive insights into vehicle health and driving habits.
Experiments: Predictive analytic experiments contributed by Microsoft and those in the data science community. The experiments demonstrate advanced machine learning techniques and can be used as a starting point for developing your own solutions. For example, the Telco Customer Churn experiment uses classification algorithms to predict whether a customer will churn.
Machine Learning APIs: APIs that can access operationalized predictive analytic solutions. Some of the APIs are referenced within the “Perceptual intelligence” section listed in the table above. For example, the Face APIs were published by Microsoft and are part of Microsoft Project Oxford. They provide state-of-the-art algorithms for processing face images.
Notebooks: A collection of Jupyter notebooks. The notebooks are integrated within Machine Learning Studio and serve as web applications for running code, visualizing data, and trying out ideas. For example, the notebook Topic Discovery in Twitter Tweets demonstrates how a Jupyter notebook can be used for mining Twitter text.
Tutorials: Tutorials on how to use Cortana Analytics to solve real-world problems. For example, the iPhone app for RRS tutorial describes how to create an iOS app that can consume an Azure ML RRS API using the Xamarin development software that ships with Visual Studio.
Collections: A site for grouping together experiments, templates, APIs, or other items within the Cortana Analytics Gallery.

Although Microsoft has changed the name of the gallery to make it more CAS-friendly, much of the content still focuses on the Machine Learning service. Even so, the gallery could prove to be a valuable resource for organizations jumping aboard the CAS train, particularly once the gallery has gained more momentum.
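The churn experiment mentioned in the gallery can be miniaturized to show what a classification model does. This toy nearest-centroid classifier uses two invented features (monthly spend, support calls); the data and the classifier are illustrative only, not the actual Azure ML experiment:

```python
# Toy churn classifier: label a customer by the nearest class centroid
# in feature space. Features and training data are invented.
def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def predict(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

train = {
    "churn":    [(20, 5), (25, 6), (18, 7)],   # low spend, many support calls
    "no_churn": [(80, 1), (90, 0), (70, 2)],   # high spend, few support calls
}
centroids = {label: centroid(rows) for label, rows in train.items()}
print(predict((22, 6), centroids))   # churn
print(predict((85, 1), centroids))   # no_churn
```

The gallery experiment does the same thing at scale with proper algorithms, feature engineering, and evaluation, but the input/output shape — customer features in, churn label out — is the same.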

Cortana Analytics Workshop: The "Big Data" of the Cortana Analytics Suite, Part 1

Cortana Analytics Workshop: The "Big Data" of the Cortana Analytics Suite, Part 2

20 July 2016

Getting Started with Oracle OpenStack

Getting Started with OpenStack in Oracle Solaris 11.3


Getting Started with Oracle OpenStack

Oracle Solaris 11 includes a complete OpenStack distribution called Oracle OpenStack for Oracle Solaris. OpenStack, the popular open source cloud computing platform, provides comprehensive self-service environments for sharing and managing compute, network, and storage resources through a centralized web-based portal.

Oracle Solaris Overview

OpenStack has been integrated into all the core technology foundations of Oracle Solaris, allowing you to set up an enterprise private cloud infrastructure in minutes.

Simplify Cloud Deployment with Oracle

Why OpenStack on Oracle Solaris?

Using OpenStack with Oracle Solaris provides the following advantages:

Industry-proven hypervisor. Oracle Solaris Zones offer significantly lower virtualization overhead, making them a perfect fit for OpenStack compute resources. Oracle Solaris Kernel Zones also provide independent kernel versions without compromise, allowing each zone to be patched independently.

Oracle Solaris Simple, Flexible, Fast: Virtualization in 11.3

Secure and compliant application provisioning. 

Oracle - Secure, Containerized and Highly-Available OpenStack on  

The Unified Archive feature of Oracle Solaris enables rapid application deployment in the cloud via a new archive format that enables portability between bare-metal systems and virtualized systems. Instant cloning in the cloud enables you to scale out and to reliably deal with disaster recovery emergencies.

Oracle Solaris Secure Cloud Infrastructure

Unified Archives in Oracle Solaris 11, combined with capabilities such as Immutable Zones for read-only virtualization and the new Oracle Solaris compliance framework, enable administrators to ensure end-to-end integrity and can significantly reduce the ongoing cost of compliance.

Oracle Solaris Build and Run Applications Better on 11.3

Fast, fail-proof cloud updates. Oracle Solaris makes updating OpenStack an easy and fail-proof process, updating a full cloud environment in less than twenty minutes. Through integration with the Oracle Solaris Image Packaging System (IPS), ZFS boot environments ensure quick rollback in case anything goes wrong, allowing administrators to quickly get back up and running.

Oracle Solaris Cloud Management and Deployment with OpenStack

Application-driven software-defined networking. Taking advantage of Oracle Solaris network virtualization capabilities, applications can now drive their own behavior for prioritizing network traffic across the cloud. The Elastic Virtual Switch (EVS) feature of Oracle Solaris provides a single point of control and enables the management of tenant networks through VLANs and VXLANs. The networks are flexibly connected to virtualized environments that are created on the compute nodes.

Oracle Solaris Software Integration

Single-vendor solution. Oracle is the #1 enterprise vendor offering a full-stack solution that provides the ability to get end-to-end support from a single vendor for database as a service (DBaaS), platform as a service (PaaS), and infrastructure as a service (IaaS), saving significant heartache and cost.

Oracle Solaris 11 includes the OpenStack Juno release (starting with Oracle Solaris 11.2 SRU 10.5 and in Oracle Solaris 11.3).

Available OpenStack Services

The following OpenStack services are available in Oracle Solaris 11:

Nova. Nova provides the compute capability in a cloud environment, allowing self-service users to be able to create virtual environments from an allocated pool of resources. A driver for Nova has been written to take advantage of Oracle Solaris non-global zones and kernel zones.

Neutron. Neutron manages networking within an OpenStack cloud. Neutron creates and manages virtual networks across multiple physical nodes so that self-service users can create their own subnets that virtual machines (VMs) can connect to and communicate with. Neutron uses a highly extensible plug-in architecture, allowing complex network topologies to be created to support a cloud environment. A driver for Neutron has been written to take advantage of the network virtualization features of Oracle Solaris 11 including the Elastic Virtual Switch that automatically creates the tenant networks across multiple physical nodes.

Cinder. Cinder is responsible for block storage in the cloud. Storage is presented to the guest VMs as virtualized block devices known as Cinder volumes. There are two classes of storage: ephemeral volumes and persistent volumes. Ephemeral volumes exist only for the lifetime of the VM instance, but will persist across reboots of the VM. Once the instance has been deleted, the storage is also deleted. Persistent volumes are typically created separately and attached to an instance. Cinder drivers have been written to take advantage of the ZFS file system, allowing volumes to be created locally on compute nodes or served remotely via iSCSI or Fibre Channel. Additionally, a Cinder driver exists for Oracle ZFS Storage Appliance.

Glance. Glance provides image management services within OpenStack with support for the registration, discovery, and delivery of images that are used to install VMs created by Nova. Glance can use different storage back ends to store these images. The primary image format that Oracle Solaris 11 uses is Unified Archives. Unified Archives can be provisioned across both bare-metal and virtual systems, allowing for complete portability in an OpenStack environment.

Keystone. Keystone is the identity service for OpenStack. It provides a central directory of users—mapped to the OpenStack projects they can access—and an authentication system between the OpenStack services.

Horizon. Horizon is the web-based dashboard that allows administrators to manage compute, network, and storage resources in the data center and allocate those resources to multitenant users. Users can then create and destroy VMs in a self-service capacity, determine the networks on which those VMs communicate, and attach storage volumes to those VMs.

Swift. Swift provides object- and file-based storage in OpenStack. Swift provides redundant and scalable storage, with data replicated across distributed storage clusters. If a storage node fails, Swift will quickly replicate its content to other active nodes. Additional storage nodes can be added to the cluster with full horizontal scale. Oracle Solaris 11 supports Swift being hosted in a ZFS environment.

Ironic. Ironic provides bare-metal provisioning in an OpenStack cloud, as opposed to VMs that are handled by Nova. An Ironic driver has been written to take advantage of the Oracle Solaris Automated Installer, which handles multinode provisioning of Oracle Solaris 11 systems.

Heat. Heat provides application orchestration in the cloud, allowing administrators to describe multitier applications by defining a set of resources through a template. As a result, a self-service user can execute this orchestration and have the appropriate compute, network, and storage deployed in the appropriate order.
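
Heat's ordering behavior amounts to dependency resolution: provision each resource only after everything it depends on exists. A minimal Python sketch of that idea (the resource names and the dependency map are hypothetical illustrations, not Heat's actual template format):

```python
# Toy sketch of dependency-ordered provisioning, in the spirit of a Heat
# template: each resource lists the resources it depends on, and the
# orchestrator deploys dependencies first.

def deploy_order(resources):
    """Return a deployment order in which every resource follows its dependencies."""
    order, visiting, done = [], set(), set()

    def visit(name):
        if name in done:
            return
        if name in visiting:
            raise ValueError("circular dependency at " + name)
        visiting.add(name)
        for dep in resources[name]:
            visit(dep)
        visiting.discard(name)
        done.add(name)
        order.append(name)

    for name in resources:
        visit(name)
    return order

# Hypothetical three-tier stack: the VM needs its network and volume first.
stack = {
    "tenant_net": [],
    "data_volume": [],
    "app_server": ["tenant_net", "data_volume"],
}

print(deploy_order(stack))  # dependencies always precede "app_server"
```

In a real Heat template the engine derives this graph from resource references inside the template, and it also handles rollback when a resource fails to create.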

Modern Cloud Infrastructure with Oracle Enterprise OpenStack

Oracle Solaris 11's built-in virtualization provides a highly efficient and scalable solution that sits at the core of the platform. With the inclusion of Kernel Zones, Oracle Solaris 11 provides a flexible, cost-efficient, cloud-ready solution that is a perfect fit for the data center. Enhancements and new features include:

  • Secure Live Migration and OS version flexibility with Oracle Solaris Kernel Zones
  • Cloud-ready: a core feature of the OpenStack distribution included in Oracle Solaris 11
  • Rapid adoption with support for Oracle Solaris 10 Zones on Oracle Solaris 11
  • Integration with Oracle Solaris 11 software-defined networking
  • Read-only security with Immutable Zones
  • Zero downtime with Live Reconfiguration of Zones
  • Enhanced mobility with Zones on Shared Storage
  • Simple deployment and updates, enabled by tight integration with the lifecycle management system

Oracle Solaris combines the power of industry standard security features, unique security and anti-malware capabilities, and compliance management tools for low risk application deployments and cloud infrastructure. Oracle hardware systems and software in silicon provide the anti-malware trust anchors, accelerate cryptography, and help protect from memory attacks with ADI, NX, and SMEP.

Oracle Solaris:

  • Provides a more secure enterprise cloud
  • Provides a more secure application lifecycle
  • Provides a more compliant infrastructure
  • Provides a more secure application
  • Provides a more secure infrastructure
  • Is an assured and tested low-risk platform

Oracle Solaris Software Integration

More Information:

Here's Your Oracle Solaris 11.3 List of Blog Posts:

Oracle Solaris 11.3 Blog List

07 June 2016

SQL Server 2016 Finally Released

SQL Server 2016 is Here!


SQL Server 2016 is the latest addition to Microsoft’s data platform, with a variety of new features and enhancements that deliver breakthrough performance, advanced security, and richer, integrated reporting and analytics capabilities. Built using the new rapid-release model, SQL Server 2016 incorporates many features introduced first in the cloud in Microsoft Azure SQL Database. Furthermore, SQL Server 2016 includes the capability to dynamically migrate historical data to the cloud.

SQL Server 2016 General Availability Announcement with Rohan Kumar 

Introducing Microsoft SQL Server 2016 leads you through the major changes in the data platform, whether you are using SQL Server technology on-premises or in the cloud, but it does not cover every new feature added to the platform. Instead, we explain key concepts and provide examples for the more significant features so that you can start experiencing their benefits firsthand.

What's New in SQL Server 2016

Faster queries

When users want data, they want it as fast as you can give it to them. Microsoft SQL Server 2016 includes several options for enabling faster queries. Memory-optimized tables now support even faster online transaction processing (OLTP) workloads, with better throughput as a result of new parallelized operations. For analytic workloads, you can take advantage of updateable, clustered columnstore indexes on memory-optimized tables to achieve queries that are up to one hundred times faster. Not only is the database engine better and faster in SQL Server 2016, but enhancements to the Analysis Services engine also deliver faster performance for both multidimensional and tabular models. The faster you can deliver data to your users, the faster they can use that data to make better decisions for your organization.

In-Memory OLTP enhancements

Introduced in SQL Server 2014, In-Memory OLTP helps speed up transactional workloads that have high concurrency and heavy latch contention by moving data from disk-based tables to memory-optimized tables and by natively compiling stored procedures.

In-Memory OLTP in SQL Server 2016 

In-Memory OLTP can also help improve the performance of data warehouse staging by using nondurable, memory-optimized tables as staging tables. Although there were many good reasons to use memory-optimized tables in the first release of In-Memory OLTP, several limitations restricted the number of use cases for which it was suitable. In this section, we describe the many enhancements that make it easier to put memory-optimized tables to good use.

Sql Server 2016 Evolution Part 1

Reviewing new features for memory-optimized tables

In SQL Server 2016, you can implement the following features in memory-optimized tables:
  • FOREIGN KEY constraints between memory-optimized tables, as long as the foreign key references a primary key.
  • CHECK constraints.
  • UNIQUE constraints.
  • Triggers (AFTER) for INSERT/UPDATE/DELETE operations, as long as you use WITH NATIVE_COMPILATION.
  • Columns with large object (LOB) types—varchar(max), nvarchar(max), and varbinary(max).
  • Collation using any code page supported by SQL Server.
Indexes for memory-optimized tables now support the following features:
  • UNIQUE indexes.
  • Index keys with character columns using any SQL Server collation.
  • NULLable index key columns.

Better security

SQL Server 2016 introduces three new principal security features—Always Encrypted, Row-Level Security, and dynamic data masking. While all these features are security related, each provides a different level of data protection within this latest version of the database platform. Throughout this chapter, we explore the uses of these features, how they work, and when they should be used to protect data in your SQL Server database.

Always Encrypted

Always Encrypted is a client-side encryption technology in which data is automatically encrypted not only when it is written but also when it is read by an approved application. Unlike Transparent Data Encryption, which encrypts the data on disk but allows the data to be read by any application that queries the data, Always Encrypted requires your client application to use an Always Encrypted–enabled driver to communicate with the database. By using this driver, the application securely transfers encrypted data to the database that can then be decrypted later only by an application that has access to the encryption key. Any other application querying the data can also retrieve the encrypted values, but that application cannot use the data without the encryption key, thereby rendering the data useless. Because of this encryption architecture, the SQL Server instance never sees the unencrypted version of the data.

Note: At this time, the only Always Encrypted–enabled drivers are the .NET Framework Data Provider for SQL Server, which requires installation of .NET Framework version 4.6 on the client computer, and the JDBC 6.0 driver. In this chapter, we refer to both of these drivers as the ADO.NET driver for simplicity.
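
The data flow can be illustrated with a short Python sketch. The cipher below is a deliberately simplified SHA-256 keystream toy, not the encryption the Always Encrypted driver actually uses, and all names are invented; the point is only that the server stores ciphertext and the key never leaves the client:

```python
import hashlib

# Toy keystream cipher (SHA-256 in counter mode) standing in for the real
# driver's encryption; it illustrates the data flow only, NOT production crypto.
def _keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

# The "database" only ever holds ciphertext.
column_key = b"client-side-secret"
stored = encrypt(column_key, b"4111-1111-1111-1111")

# Any application can read the stored bytes, but only the key holder
# recovers the plaintext on the client side.
print(decrypt(column_key, stored))
```

Swapping in a different key yields garbage rather than the original value, which is the sense in which the encrypted column is "useless" to an application without the key.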

Higher availability

In a world that is always online, maintaining uptime and streamlining maintenance operations for your mission-critical applications are more important than ever. In SQL Server 2016, the capabilities of the AlwaysOn Availability Group feature continue to evolve from previous versions, enabling you to protect data more easily and flexibly and with greater throughput to support modern storage systems and CPUs. Furthermore, AlwaysOn Availability Groups and AlwaysOn Failover Cluster Instances now have higher security, reliability, and scalability. By running SQL Server 2016 on Windows Server 2016, you have more options for better managing clusters and storage. In this chapter, we introduce the new features that you can use to deploy more robust high-availability solutions.

AlwaysOn Availability Groups

First introduced in SQL Server 2012 Enterprise Edition, the AlwaysOn Availability Groups feature provides data protection by sending transactions from the transaction log on the primary replica to one or more secondary replicas, a process that is conceptually similar to database mirroring. In SQL Server 2014, the significant enhancement to availability groups was the increase in the number of supported secondary replicas from three to eight. SQL Server 2016 includes a number of new enhancements that we explain in this section:

  •  AlwaysOn Basic Availability Groups
  •  Support for group Managed Service Accounts (gMSAs)
  •  Database-level failover
  •  Distributed Transaction Coordinator (DTC) support
  •  Load balancing for readable secondary replicas
  •  Up to three automatic failover targets
  •  Improved log transport performance

Improved database engine

In past releases of SQL Server, Microsoft has targeted specific areas for improvement. In SQL Server 2005, the storage engine was new. In SQL Server 2008, the emphasis was on server consolidation. Now, in SQL Server 2016, you can find enhanced functionality across the entire database engine. With Microsoft now managing more than one million SQL Server databases through its Database as a Service (DBaaS) offering—Microsoft Azure SQL Database—it is able to respond more quickly to opportunities to enhance the product and validate those enhancements comprehensively before adding features to the on-premises version of SQL Server. SQL Server 2016 is a beneficiary of this new development paradigm and includes many features that are already available in SQL Database. In this chapter, we explore a few of the key new features, which enable you to better manage growing data volumes and changing data systems, manage query performance, and reduce barriers to entry for hybrid cloud architectures.

Sql Server 2016 Evolution Part 2

SQL Server 2016 introduces a new hybrid feature called Stretch Database that combines the power of Azure SQL Database with an on-premises SQL Server instance to provide nearly bottomless storage at a significantly lower cost, plus enterprise-class security and near-zero management overhead. With Stretch Database, you can store cold, infrequently accessed data in Azure, usually with no changes to application code. All administration and security policies are still managed from the same local SQL Server database as before.

Understanding Stretch Database architecture

Enabling Stretch Database for a SQL Server 2016 table creates a new Stretch Database in Azure, an external data source in SQL Server, and a remote endpoint for the database, as shown in Figure 4-7. User logins query the stretch table in the local SQL Server database, and Stretch Database rewrites the query to run local and remote queries according to the locality of the data. Because only system processes can access the external data source and the remote endpoint, user queries cannot be issued directly against the remote database.
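
The query rewriting can be pictured with a small Python sketch. The tables, cutoff date, and routing rule here are invented for illustration; the real feature decides locality from its migration state, not a hard-coded date:

```python
from datetime import date

# Toy model of Stretch Database routing: rows colder than a cutoff live in the
# remote (Azure) store, the rest stay local; one logical query spans both.
CUTOFF = date(2015, 1, 1)

local_rows  = [(1, date(2016, 3, 1), "hot"), (2, date(2015, 6, 1), "warm")]
remote_rows = [(3, date(2013, 2, 1), "cold"), (4, date(2014, 8, 1), "cold")]

def query(predicate):
    """Rewrite one logical query into a local part and a remote part,
    then merge the results, as Stretch Database does transparently."""
    return [r for r in local_rows if predicate(r)] + \
           [r for r in remote_rows if predicate(r)]

# Users query the stretch table as a whole; data locality is invisible to them.
everything = query(lambda r: True)
cold_only  = query(lambda r: r[1] < CUTOFF)
print(len(everything), len(cold_only))  # 4 2
```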

Security and Stretch Database

One of the biggest concerns about cloud computing is the security of data leaving an organization’s data center. In addition to the world-class physical security provided at Azure data centers, Stretch Database includes several additional security measures. If required, you have the option to enable Transparent Data Encryption to provide encryption at rest. All traffic into and out of the remote database is encrypted and certificate validation is mandatory. This ensures that data never leaves SQL Server in plain text and the target in Azure is always verified.

Broader data access

As the cost to store data continues to drop and the number of data formats commonly used by applications continues to change, you need the ability both to manage access to historical data relationally and to seamlessly integrate relational data with semistructured and unstructured data. SQL Server 2016 includes several new features that support this evolving environment by providing access to a broader variety of data. The introduction of temporal tables enables you to maintain historical data in the database, to transparently manage data changes, and to easily retrieve data values at a particular point in time. In addition, SQL Server allows you to import JavaScript Object Notation (JSON) data into relational storage, export relational data as JSON structures, and even to parse, aggregate, or filter JSON data. For scalable integration of relational data with semistructured data in Hadoop or Azure storage, you can take advantage of SQL Server PolyBase, which is no longer limited to the massively parallel computing environment that it was when introduced in SQL Server 2014.
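
The JSON round trip that FOR JSON and OPENJSON enable in T-SQL can be sketched in Python (the column names are hypothetical):

```python
import json

# Relational rows -> JSON (what FOR JSON does) and back (what OPENJSON does),
# sketched in Python with hypothetical columns.
rows = [
    {"OrderID": 1, "Customer": "Contoso", "Total": 120.50},
    {"OrderID": 2, "Customer": "Fabrikam", "Total": 99.00},
]

as_json = json.dumps(rows)        # export: rows out as a JSON array
round_trip = json.loads(as_json)  # import: parse JSON back into rows

# Filter over the parsed JSON, as OPENJSON lets T-SQL do.
big_orders = [r for r in round_trip if r["Total"] > 100]
print(len(big_orders))  # 1
```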

Temporal data

A common challenge with data management is deciding how to handle changes to the data. At a minimum, you need an easy way to resolve an accidental change without resorting to a database restore. Sometimes you must be able to provide an audit trail to document how a row changed over time and who changed it. If you have a data warehouse, you might need to track historical changes for slowly changing dimensions. Or you might need to perform a trend analysis to compare values for a category at different points in time or find the value of a business metric at a specific point in time.
To address these various needs for handling changes to data, SQL Server 2016 now supports temporal tables, which were introduced in the ANSI SQL:2011 standard. In addition, Transact-SQL has been extended to support the creation of temporal tables and the querying of these tables relative to a specific point in time.

A temporal table allows you to find the state of data at any point in time. When you create a temporal table, the system actually creates two tables. One table is the current table (also known as the temporal table), and the other is the history table. The history table is created as a page-compressed table by default to reduce storage utilization. As data changes in the current table, the database engine stores a copy of the data as it was prior to the change in the history table.
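
A rough Python model of that behavior, with invented names (a real temporal table uses declared period columns and the FOR SYSTEM_TIME AS OF clause):

```python
from datetime import datetime, timedelta

# Toy model of a system-versioned temporal table: every update copies the
# prior row version into a history table before overwriting the current row.
current, history = {}, []

def upsert(key, value, now):
    if key in current:
        old_value, valid_from = current[key]
        history.append((key, old_value, valid_from, now))  # close old version
    current[key] = (value, now)

def as_of(key, point_in_time):
    """AS OF query: return the row version valid at a given instant."""
    value, valid_from = current.get(key, (None, None))
    if valid_from is not None and valid_from <= point_in_time:
        return value
    for k, v, start, end in history:
        if k == key and start <= point_in_time < end:
            return v
    return None

t0 = datetime(2016, 1, 1)
upsert("price", 10, t0)
upsert("price", 12, t0 + timedelta(days=30))
print(as_of("price", t0 + timedelta(days=5)))   # 10: the historical version
print(as_of("price", t0 + timedelta(days=40)))  # 12: the current version
```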

The use of temporal tables has a few limitations. First, system versioning and the FileTable and FILESTREAM features are incompatible. Second, you cannot use CASCADE options when a temporal table is the referencing table in a foreign-key relationship. Last, you cannot use INSTEAD OF triggers on the current or history table, although you can use AFTER triggers on the current table.


PolyBase was introduced in SQL Server 2014 as an interface exclusively for Microsoft Analytics Platform System (APS; formerly known as Parallel Data Warehouse), with which you could access data stored in Hadoop Distributed File System (HDFS) by using SQL syntax in queries.
In SQL Server 2016, you can now use PolyBase to query data in Hadoop or Azure Blob Storage and combine the results with relational data stored in SQL Server. To achieve optimal performance, PolyBase can dynamically create columnstore tables, parallelize data extraction from Hadoop and Azure sources, or push computations on Hadoop-based data to Hadoop clusters as necessary. After you install the PolyBase service and configure PolyBase data objects, your users and applications can access data from nonrelational sources without any special knowledge about Hadoop or blob storage.

Installing PolyBase

SQL Server 2016 - Polybase

You can install only one instance of PolyBase on a single server, which must also have a SQL Server instance installed because the PolyBase installation process adds the following three databases: DWConfiguration, DWDiagnostics, and DWQueue. The installation process also adds the PolyBase engine service and PolyBase data movement service to the server.
Before you can install PolyBase, your computer must meet the following requirements:
  • Installed software: Microsoft .NET Framework 4.5 and Oracle Java SE Runtime Environment (JRE) version 7.51 or higher (64-bit)
  • Minimum memory: 4 GB
  • Minimum hard-disk space: 2 GB
  • TCP/IP connectivity enabled
Polybase: Hadoop Integration in SQL Server PDW V2

To install PolyBase by using the SQL Server Installation Wizard, select PolyBase Query Service For External Data on the Feature Selection page. Then, on the Server Configuration page, you must configure the SQL Server PolyBase engine service and the SQL Server PolyBase data movement service to run under the same account. (If you create a PolyBase scale-out group, you must use the same service account across all instances.) Next, on the PolyBase Configuration page, you specify whether your SQL Server instance is a standalone PolyBase instance or part of a PolyBase scale-out group. As we describe later in this chapter, when you configure a PolyBase scale-out group, you specify whether the current instance is a head node or a compute node. Last, you define a range with a minimum of six ports to allocate to PolyBase.

Scaling out with PolyBase

Because data sets can become quite large in Hadoop or blob storage, you can create a PolyBase scale-out group, as shown in Figure 5-9, to improve performance.

A PolyBase scale-out group has one head node and one or more compute nodes. The head node consists of the SQL Server database engine, the PolyBase engine service, and the PolyBase data movement service, whereas each compute node consists of a database engine and data movement service. The head node receives the PolyBase queries, distributes the work involving external tables to the data movement service on the available compute nodes, receives the results from each compute node, finalizes the results in the database engine, and then returns the results to the requesting client. The data movement service on the head node and compute nodes is responsible for transferring data between the external data sources and SQL Server and between the SQL Server instances on the head and compute nodes.
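
That division of labor can be sketched in Python. The data, sharding scheme, and aggregation below are invented stand-ins for what the PolyBase engine and data movement services do internally:

```python
# Toy sketch of a PolyBase scale-out group: the head node splits the external
# data among compute nodes, each scans its share, and the head finalizes.
external_data = list(range(1, 101))  # stand-in for rows in Hadoop/blob storage

def compute_node(shard, predicate):
    """Each compute node's data movement service scans only its shard."""
    return [row for row in shard if predicate(row)]

def head_node(data, n_nodes, predicate):
    shards = [data[i::n_nodes] for i in range(n_nodes)]      # distribute work
    partials = [compute_node(s, predicate) for s in shards]  # parallel in reality
    return sorted(row for part in partials for row in part)  # finalize results

result = head_node(external_data, n_nodes=4, predicate=lambda r: r % 10 == 0)
print(result)  # [10, 20, ..., 100]
```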

More analytics

Better and faster analytics capabilities have been built into SQL Server 2016. Enhancements to tabular models provide greater flexibility for the design of models, and an array of new tools helps you develop solutions more quickly and easily. As an option in SQL Server 2016, you can now use SQL Server R Services to build secure, advanced-analytics solutions at enterprise scale. By using R Services, you can explore data and build predictive models by using R functions in-database. You can then deploy these models for production use in applications and reporting tools.

Tabular enhancements

In general, tabular models are relatively easy to develop in SQL Server Analysis Services. You can build such a solution directly from a wide array of sources in their native state without having to create a set of tables as a star schema in a relational database. You can then see the results of your modeling within the design environment. However, there are some inherent limitations in the scalability and complexity of the solutions you can build. In the latest release of SQL Server, some of these limitations have been removed to better support enterprise requirements. In addition, enhancements to the modeling process make controlling the behavior and content of your model easier. In this section, we review the following enhancements that help you build better analytics solutions in SQL Server 2016:
  •  More data sources accessible in DirectQuery mode
  •  Choice of using all, some, or no data during modeling in DirectQuery mode
  •  Calculated tables
  •  Bidirectional cross-filtering
  •  Formula bar enhancements
  •  New Data Analysis Expressions (DAX) functions
  •  Using DAX variables

R integration

R is a popular open-source programming language used by data scientists, statisticians, and data analysts for advanced analytics, data exploration, and machine learning. Despite its popularity, the use of R in an enterprise environment can be challenging. Many tools for R operate in a single-threaded, memory-bound desktop environment, which puts constraints on the volume of data that you can analyze. In addition, moving sensitive data from a server environment to the desktop removes it from the security controls built into the database.

R Services in SQL Server 2016 

SQL Server R Services, the result of Microsoft’s acquisition in 2015 of Revolution Analytics, resolves these challenges by integrating a unique R distribution into the SQL Server platform. You can execute R code directly in a SQL Server database when using R Services (In-Database) and reuse the code in another platform, such as Hadoop. In addition, the workload shifts from the desktop to the server and maintains the necessary levels of security for your data. In Enterprise Edition, R Services performs multithreaded, multicore, and parallelized multiprocessor computations at high speed. Using R Services, you can build intelligent, predictive applications that you can easily deploy to production.

Installing and configuring R Services

To use SQL Server R Services, you must install a collection of components to prepare a SQL Server instance to support the R distribution. In addition, each client workstation requires an installation of the R distribution and libraries specific to R Services.

Server configuration

R Services is available in the Standard, Developer, and Enterprise editions of SQL Server 2016 or in Express Edition with Advanced Services. Only the Enterprise edition supports execution of R packages in a high-performance, parallel architecture. In the server environment, you install one of the following components from the SQL Server installation media:
  • R Services (In-Database): A database-engine feature that configures the database service to use R jobs and installs extensions to support external scripts and processes. It also downloads Microsoft R Open (MRO), an open-source R distribution. This feature requires you to have a default or named instance of SQL Server 2016.
  • R Services (Standalone): A standalone component that does not require a database-engine instance and is available only in the Enterprise edition of SQL Server 2016. It includes enhanced R packages and connectivity tools from Revolution Analytics and open-source R tools and base packages. Selection of this component also downloads and installs MRO.

Better reporting

For report developers, Reporting Services in SQL Server 2016 has a more modern development environment, two new data visualizations, and improved parameter layout options. In addition, it includes a new development environment to support mobile reports. Users also benefit from a new web portal that supports modern web browsers and mobile access to reports. In this chapter, we’ll explore these new features in detail.

What's New in Microsoft SQL Server 2016 Reporting 

Report content types

This release of Reporting Services includes both enhanced and new report content types:
  •  Paginated reports: Paginated reports are the traditional content type for which Reporting Services is especially well suited. You use this content type when you need precise control over the layout, appearance, and behavior of each element in your report. Users can view a paginated report online, export it to another format, or receive it on a scheduled basis by subscribing to the report. A paginated report can consist of a single page or hundreds of pages, based on the data set associated with the report. The need for this type of report persists in most organizations, alongside the other report content types that are now available in the Microsoft reporting platform.
  •  Mobile reports: In early 2015, Microsoft acquired Datazen Software to make it easier to deploy reports to mobile devices, regardless of operating system and form factor. This content type is best when you need touch-responsive, easy-to-read reports that are displayed on smaller screens, communicate key metrics effectively at a glance, and support drill-through to view supporting details. In SQL Server 2016, users can view both paginated and mobile reports through the web portal interface of the on-premises report server.
  •  Key performance indicators (KPIs): A KPI is a simple type of report content that you can add to the report server to display metrics and trends at a glance. This content type uses colors to indicate progress toward a goal and an optional visualization to show how values trend over time.
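
The KPI idea is simple enough to sketch in Python; the thresholds and colors below are hypothetical, not the product's defaults:

```python
# Toy KPI evaluation like the Reporting Services KPI content type: compare a
# metric to its goal and map progress to a status color.
def kpi_status(value, goal, warn_ratio=0.8):
    if value >= goal:
        return "green"   # goal met or exceeded
    if value >= goal * warn_ratio:
        return "amber"   # within the warning band below the goal
    return "red"         # well short of the goal

print(kpi_status(105, 100), kpi_status(85, 100), kpi_status(40, 100))
# green amber red
```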

Improved Azure SQL Database

Microsoft Azure SQL Database was one of the first cloud services to offer a secure, robust, and flexible database platform to host applications of all types and sizes. When it was introduced, SQL Database had only a small subset of the features available in the SQL Server database engine. With the introduction of version V12 and features such as elastic database pools, SQL Database is now an enterprise-class platform-as-a-service (PaaS) offering. Furthermore, its rapid development cycle is beneficial to both SQL Database and its on-premises counterpart. By integrating new features into SQL Database ahead of SQL Server, the development team can take advantage of a full testing and telemetry cycle, at scale, that allows them to add features to both products much faster. In fact, several of the features in SQL Server 2016 described in earlier chapters, such as Always Encrypted and Row-Level Security, result from the rapid development cycle of SQL Database.

Introduction to SQL Database

Microsoft Azure SQL Database is one of many PaaS offerings available from Microsoft. It was introduced in March 2009 as a relational database-as-a-service called SQL Azure, but it had a limited feature set and data volume restrictions that were useful only for very specific types of small applications. Since then, SQL Database has evolved to attain greater parity with its on-premises predecessor, SQL Server. If you have yet to implement a cloud strategy for data management because of the initial limitations of SQL Database, now is a good time to become familiar with its latest capabilities and discover how best to start integrating it into your technical infrastructure.

Elastic database features

Microsoft has introduced elastic database features into SQL Database to simplify the implementation and management of software-as-a-service (SaaS) solutions. To optimize and simplify the management of your application, use one or more of the following features:
  •  Elastic scale This feature allows you to grow and shrink the capacity of your database to accommodate different application requirements. One way to manage elasticity is to partition your data across a number of identically structured databases by using a technique called sharding. You use the elastic database tools to easily implement sharding in your database.
  •  Elastic database pool Rather than explicitly allocate database transaction units (DTUs) to a single database, you can use an elastic database pool to allocate a common pool of DTU resources shared across multiple databases. That way you can support multiple types of workloads on demand without monitoring your databases individually for changes in performance requirements that necessitate intervention.
  •  Elastic database jobs You use an elastic database job to execute a T-SQL script against all databases in an elastic database pool to simplify administration for repetitive tasks such as rebuilding indexes. SQL Database automatically scales your script and applies built-in retry logic when necessary.
  •  Elastic query When you need to combine data from multiple databases, you can create a single connection string and execute a single query. SQL Database then aggregates the data into one result set.
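The elastic query feature in the last bullet follows a fan-out-and-aggregate pattern: the same query runs against every member database and the results are merged into one set. As a conceptual illustration only (not the actual elastic query API, which you drive through T-SQL external data sources), the pattern can be sketched in Python; the database names and rows below are hypothetical in-memory stand-ins:

```python
# Conceptual sketch of the fan-out/aggregate pattern behind elastic query.
# SQL Database performs this merging for you behind a single connection
# string; the "shards" here are hypothetical in-memory substitutes.

def elastic_query(shards, predicate):
    """Run the same filter against every shard and merge the results."""
    results = []
    for shard_name, rows in shards.items():
        results.extend(row for row in rows if predicate(row))
    return results

shards = {
    "customers_db_1": [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}],
    "customers_db_2": [{"id": 3, "region": "EU"}],
}

eu_customers = elastic_query(shards, lambda r: r["region"] == "EU")
print(sorted(r["id"] for r in eu_customers))  # -> [1, 3]
```

The key point the sketch captures is that the application issues one logical query and never addresses individual shards directly.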

Managing elastic scale

Sharding is not a new concept, but it has traditionally been challenging to implement because it often requires custom code and adds complexity to the application layer. Elastic database tools are available to simplify creating and managing sharded applications in SQL Database by using an elastic database client library or the Split-Merge service. These tools are useful whether you distribute your database across multiple shards or implement one shard per end customer, as shown in Figure 8-7.

You should consider sharding your application if either of the following conditions applies:
  •  The size of application data exceeds the limitations of SQL Database.
  •  Different shards of the database must reside in different geographies for compliance, performance, or geopolitical reasons.
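At the heart of any sharded design is a routing function that maps a sharding key to the database holding that key's data, which is the role the shard map in the elastic database client library plays. A minimal hash-based sketch in Python, with hypothetical shard names and key space, looks like this:

```python
# Minimal sketch of hash-based shard routing, similar in spirit to the
# shard-map concept in the elastic database client library. The shard
# names and the customer-id key are hypothetical.

import hashlib

SHARDS = ["shard_00", "shard_01", "shard_02", "shard_03"]

def shard_for(customer_id: int) -> str:
    """Map a sharding key deterministically to one of the identically
    structured databases."""
    digest = hashlib.sha256(str(customer_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every lookup for the same key must route to the same shard.
assert shard_for(42) == shard_for(42)
print(shard_for(42) in SHARDS)  # -> True
```

In practice the elastic database tools maintain this mapping in a shard map store rather than a pure hash, so that shards can be split or merged without rehashing every key, which is what the Split-Merge service automates.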

Where You Can Get Additional Information

Below are some additional resources that you can use to learn more about SQL Server 2016.

SQL Server 2016 Early Access Web Site: https://www.microsoft.com/en/server-cloud/products/sql-server-2016/

SQL Server 2016 data sheet: http://download.microsoft.com/download/F/D/3/FD33C34D-3B65-4DA9-8A9F-0B456656DE3B/SQL_Server_2016_datasheet.pdf


SQL Server 2016 release notes: https://msdn.microsoft.com/en-US/library/dn876712.aspx

What’s new in SQL Server, September Update: https://msdn.microsoft.com/en-US/library/bb500435.aspx