Materials for certain CASCON 2018 workshops are now available here.

Workshops at CASCON 2018 will provide a forum to present, discuss, and debate issues, problems, ideas, technology gaps, work-in-progress, and/or directions. The format of a workshop may include position papers, expert panels, hands-on exercises, and discussions. All submitted workshop proposals require long abstracts of 1500 words maximum, typically including the abstract, rationale, technical/research scope, organizers, workshop format, and expected outcomes. Long abstracts of accepted workshops will be included in the conference proceedings published by CASCON and included in the ACM Digital Library.

Congratulations to the Workshop Chairs!

Accepted Workshops

Thank you for your submissions. See you all at CASCON 2018.

Monday PM workshops run from 03:15 to 05:15 PM

Building AI models using IBM Watson Studio (HandsOn, in Conf Center 2)

Chair(s): Sarah Packowski , Wendy Switzer
Theme: Cognitive Computing
Room: Conf Center 2
Format: HandsOn
Level: Beginner
Prereq: 1. You MUST bring your own laptops for the hands-on exercise
2. Set up Watson Studio and Watson Knowledge Catalog on IBM Cloud: https://dataplatform.cloud.ibm.com/registration/stepone
Description: In this workshop you will train AI models that process different types of sample input data, including structured data (tabular) and unstructured data (images and sound). Then you will see how to integrate those models into fun sample apps that solve everyday challenges. With our help, you will create models in different ways using a variety of tools in IBM Watson Studio.

About Watson Studio: IBM Watson Studio provides a range of tools to help you train AI models - from graphical tools that guide you, step-by-step, in choosing machine learning algorithms to notebooks where you can construct complex neural networks by hand. Watson Studio simplifies AI development, whether you are looking for tools to make learning AI easier, tools to rapidly prototype your AI inventions, or a cost-effective, powerful platform for AI research or enterprise AI solutions. See more: https://medium.com/ibm-watson/introducing-ibm-watson-studio-e93638f0bb47

View Workshop Detail

Build a cognitive serverless Slack app with IBM Cloud Functions & IBM Watson API (HandsOn, in Holly-Butternut)

Chair(s): Serjik Dikaleh , Eric Charpentier , John Liu , Neil DeLima , Vince Yuen
Theme: Cloud Computing
Room: Holly-Butternut
Format: HandsOn
Level: Intermediate
Prereq: - Some JavaScript and Node.js knowledge
- Sign up for an IBM Cloud and a Slack account
Description: Slack is an easy-to-use collaboration tool that serves as a digital communication hub in many companies and teams. Based on Apache OpenWhisk, IBM Cloud Functions is a functions-as-a-service (FaaS) programming platform for developing lightweight code that executes scalably on demand.
 
This workshop will teach the audience to build a Slack app by implementing several serverless IBM Cloud functions and integrating them into Slack channels through the Slack Events API. The application will also leverage IBM Watson APIs to provide a Slack chatbot that chats with users and delivers cognitive services within a demonstrated knowledge domain.
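As a taste of what such an integration involves, here is a minimal sketch of a serverless function handling the Slack Events API handshake. When you register an endpoint, Slack sends a one-time `url_verification` event whose `challenge` value must be echoed back; the `main(params)` entry point below follows the OpenWhisk action convention. The message-handling branch is a placeholder, not the workshop's actual code.

```javascript
// Sketch of an OpenWhisk-style action for the Slack Events API.
// Slack verifies the endpoint by sending a "url_verification" event;
// the endpoint must echo back the "challenge" value it receives.
function main(params) {
  if (params.type === 'url_verification') {
    // Echo the challenge so Slack can verify this endpoint.
    return { body: { challenge: params.challenge } };
  }
  if (params.event && params.event.type === 'message') {
    // A full app would call a Watson API here and post a reply to Slack.
    return { body: { status: 'message received' } };
  }
  return { body: { status: 'ignored' } };
}

module.exports = { main };
```

In a deployed app, the same function would forward message text to a Watson service before responding.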

View Workshop Detail

Build better APIs with the next generation of API testing and monitoring (HandsOn, in Orchid)

Chair(s): ivy ho , JJ Tang , JISOO LEE , Amirali Jafarian , Peter El-koreh
Theme: Data and Analytics
Room: Orchid
Format: HandsOn
Level: Beginner
Prereq: None. Nice to have some knowledge of what APIs are.

You may need your own laptop, with an account signed up at https://www.ibm.com/cloud/api-connect/api-test before joining the workshop.
Description: The proliferation of APIs across all facets of life continues to explode and grow, making the quality of APIs and their data retrieval a critical factor. In this workshop, we will walk you through a new no-code way of validating API quality: how you can validate API payload accuracy, and how you can gain new insights into API data from real business use cases in different sectors. You will witness this innovative approach to API testing and monitoring first-hand.

View Workshop Detail

Come learn how to deploy Open Liberty applications using Docker, Kubernetes, Helm and MicroProfile! (HandsOn, in Violet)

Chair(s): Arthur De Magalhaes , Leo Christy Jesuraj
Theme: Cloud Computing
Room: Violet
Format: HandsOn
Level: Beginner
Prereq:
Description: Application modernization is in everyone's mind - but what environment do you migrate your legacy application into? Can that environment also host your new cloud-native applications?

In this hands-on lab you'll learn how to leverage Open Liberty's open source Docker container to package your applications (legacy or cloud-native) into a container and store them in a private, on-premises Docker registry.

You will then use IBM's Open Source Helm charts to deploy these applications (plus a database!) into Kubernetes using production-grade settings such as auto-scaling and health monitoring.

Lastly, you will see how MicroProfile OpenAPI can help your containerized microservices expose their REST APIs and enable an internal API economy between legacy and cloud-native applications.

The environment for this lab will be IBM Cloud Private - a production-ready Kubernetes platform.

View Workshop Detail

Best Practices and Lessons Learned in Microservices (Panel, in Evergreen)

Chair(s): Julia Rubin , Yingying Wang , Harshavardhan Kadiyala , John Steinbacher , Tony Erwin
Theme: Cloud Computing
Room: Evergreen
Format: Panel
Level: Beginner
Prereq:
Description: Microservice-based architecture is an approach to developing a single application as a suite of independent services. The services run in separate processes and communicate with each other via lightweight language-agnostic protocols, such as HTTP REST. The services are split following business capabilities; each service has a fully automated pipeline and is independently deployable.

Microservices aim at shortening the development lifecycle while improving the quality, availability, and scalability of applications at runtime. From the development perspective, cutting one big application into small independent pieces reinforces the component abstraction, and makes it easier for the system to maintain clear boundaries between components. At runtime, microservices can be individually scaled by adding more instances of those microservices that experience increasing traffic.

Due to these advantages, microservice-based architectures are now becoming increasingly popular in industry. Examples of companies that have been using microservices include Amazon, Netflix, IBM, Uber, LinkedIn, Groupon, and eBay.

Yet, adopting a microservice-based architecture and implementing it “right” is not a trivial endeavour. Just “jumping on the microservices trend” and expecting that the transition itself, together with the adoption of advanced technology such as Docker and Kubernetes, will allow companies to achieve significant improvements is a false belief.

In this workshop, we intend to explore best practices, lessons learned, and technical challenges practitioners face when adopting and implementing microservices. These include considerations for identifying the right service granularity and topology, issues related to synchronization and consistency, security of microservices, performance debugging, efficient monitoring and troubleshooting, and more.

Our goal is to gather researchers and practitioners interested in exchanging ideas on the topic. For practitioners, the workshop will provide a venue to learn from each other, borrow successful ideas, and avoid common mistakes. For researchers, a description of the current practices and challenges practitioners face can inspire novel software engineering methods and techniques.


The workshop will be structured as a series of panel discussions and invited talks by participants from industry and academia.

View Workshop Detail

Data-driven medicine: promise and challenges (Panel, in Primrose)

Chair(s): TOMAS TOKAR , Igor Jurisica
Theme: Data and Analytics
Room: Primrose
Format: Panel
Level: Beginner
Prereq:
Description: In the last few years, the use of machine learning (ML) and artificial intelligence (AI) in the healthcare sector has started to gain broader acceptance. These technologies may revolutionize medicine by improving diagnostic accuracy and increasing therapeutic efficiency. Their successful application requires a constant circulation of large amounts of data from patients, through healthcare professionals, to scientists and software developers. The data are not only required for the initial training and testing of the algorithms; they are also essential for monitoring the algorithms’ performance once deployed in clinical practice. Nowadays, medical data may include a range of modalities: genetic profiles, medical imaging records, data from wearable devices, clinical findings, and various socioeconomic characteristics of patients. This poses several technical and ethical challenges, and addressing them requires the introduction of novel technologies and the development of new healthcare policies. This is only possible through broad communication between medical professionals, computer science experts, and information privacy specialists.

View Workshop Detail

13th Workshop on Challenges For Parallel Computing (Speakers, in Conf Center 1)

Chair(s): Jeeva Paudel , wael yehia , Jeremy Bradbury
Theme: Systems
Room: Conf Center 1
Format: Speakers
Level: Beginner
Prereq:
Description: Parallel computing has expanded significantly over the past decade and now includes the development of applications for multi-core systems, distributed systems and heterogeneous systems. The goals of this workshop are to bring together different groups from the parallel community (application developers, language developers, compiler and tools developers, system architects and academic researchers) to explore the current challenges that parallel computing faces and present ideas on how to deal with these challenges.

View Workshop Detail

The 3rd International Workshop on Dew Computing (Speakers, in Elm2)

Chair(s): Yingwei Wang , Karolj Skala
Theme: Cloud Computing
Room: Elm2
Format: Speakers
Level: Beginner
Prereq: None
Description: DEWCOM is an annual international workshop on dew computing. The first one, DEWCOM 2016, was held in Charlottetown, Canada. The second one, DEWCOM 2017, was held in Opatija, Croatia. The third one, DEWCOM 2018, is generously sponsored by the IBM Centre for Advanced Studies and CASCON 2018 and will be held together with CASCON 2018. The details of DEWCOM 2018 can be found at http://www.dewcomputing.org/index.php/dewcom-2018/.

Dew computing is a new post-cloud computing model that appeared in 2015. While cloud computing uses centralized servers to provide various services, dew computing uses on-premises computers to provide decentralized, cloud-friendly, and collaborative micro services to end users.

Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment. It does not contradict cloud computing and does not replace it; rather, it is complementary to cloud computing. The key features of dew computing are that on-premises computers provide functionality independent of cloud services while also collaborating with them. Briefly speaking, dew computing is a better way of using local computers in the age of cloud computing.

In this workshop, research progress in dew computing will be presented. Ideas and future directions will be discussed.

This workshop will have 5 sessions, scheduled as follows:
- Session 1: Oct. 29, 3:15 – 5:15pm
- Session 2: Oct. 30, 8:30 – 10:00am
- Session 3: Oct. 30, 10:15am – 12:00pm
- Session 4: Oct. 30, 1:00 – 3:00pm
- Session 5: 3:15 – 5:15pm
Here we introduce the content of the first session.

This session will include “Dew Computing Tutorial” and a presentation: “Dewblock: A Blockchain System Based on Dew Computing.”
In the tutorial, we will focus on the following aspects: What is dew computing? What are the features of dew computing? Where can dew computing be applied to?

The presentation is about an application of dew computing to blockchain technology. Blockchain is great and has huge potential, but the size of a blockchain is always increasing, which will eventually cause problems for the use of blockchains. In this presentation, a new kind of blockchain system based on dew computing, Dewblock, will be introduced. The major feature of this new blockchain is that the data size of a client is very small while the features of a full node are still kept. This approach opens the door for blockchain technology to be widely used on personal computers and mobile devices.

View Workshop Detail

IBM Academic Skills Academy - Syllabi and Other Things (Speakers, in Jasmine)

Chair(s): Stephen Perelgut , Dennis Buttera , Colette Lacroix , Lila Adamec
Theme: Other
Room: Jasmine
Format: Speakers
Level: Intermediate
Prereq: This workshop is intended primarily for academic faculty and administrators
Description: IBM has initiated a program to "Teach the Teacher" how to use the latest technologies. This session details four of the most requested topics, giving the complete syllabus for the 40-hour course as well as highlights of the available badges: Explorer, Mastery, and Instructor.

Attendees will leave with a clear idea of what they can expect from the Skills Academy program and how they can learn materials to pass on to others.

Topics will include:
- Business Intelligence Analytics
- Mobile Application Development and IoT
- Blockchain and Design Thinking for Blockchain
- Quantum Computing

View Workshop Detail

Tuesday PM workshops run from 03:15 to 05:15 PM

Blockchain Fundamentals and Development Platforms (HandsOn, in Conf Center 2)

Chair(s): Omar Badreddin
Theme: Security
Room: Conf Center 2
Format: HandsOn
Level: Beginner
Prereq: No required prerequisites. All required software is available in the cloud. Any web browser will be sufficient to perform the development tasks.
Users MUST bring their own laptops for the hands-on exercise.
Description: Blockchain is an emerging computing and development platform. It is a new paradigm that aims at empowering peers and eliminating the need for a central authenticating authority. In essence, blockchain has introduced a novel level of distributed sovereignty. Cryptocurrency is one prominent outcome of this new computing paradigm that has gained broad attention. However, the blockchain paradigm has demonstrated broader potential impact in many disciplines, including secure software engineering, supply chain, banking, and peer-to-peer commerce.

This half-day workshop will give participants a brief background on the technology along with hands-on practice developing with some prominent open source blockchain platforms. The hands-on exercises will be led by an experienced blockchain developer.

The first talk in the workshop will introduce the fundamental Blockchain concepts. The next talk will provide an overview of current and emerging impacts of Blockchain technologies covering many disciplines and industries, including financial industry, supply chains, authentication and security, as well as the recent emergence of blockchain based distributed social networks.

After the first two talks, participants will engage in guided hands-on exercises to develop a basic blockchain application. Participants will be given the required software, which will also be made available online.

By the end of the workshop, participants are expected to have gained an in-depth understanding of the emerging blockchain technology and its applications. Participants will also gain knowledge and skills on existing blockchain development platforms. The workshop is therefore ideal for researchers and practitioners alike who are considering using blockchain in their research or work, as well as middle technical managers who want to understand how blockchains and the emergence of distributed sovereignty may impact their current lines of business. The workshop is also useful for educators who wish to introduce blockchain in their undergraduate and graduate courses.

View Workshop Detail

Modernize digital applications with Microservices management using the Istio service mesh (HandsOn, in Holly-Butternut)

Chair(s): Ozair Sheikh , Serjik Dikaleh , Dharmesh Mistry , Darren Pape , Chris Felix
Theme: Cloud Computing
Room: Holly-Butternut
Format: HandsOn
Level: Intermediate
Prereq: - Intermediate Kubernetes knowledge
- A free trial IBM Cloud (Bluemix) account
- Minimal familiarity with Linux command line
Description: Digital solutions are being built on modernized enterprise platforms deployed on cloud infrastructure and managed using container platforms. Foundational infrastructure capabilities such as load balancing and routing, previously available as software, are now provided as part of the underlying cloud platform. When designing your next-generation architecture, it is integral to understand which capabilities are available from the cloud platform versus acquiring or developing them in software. For example, load balancing and automatic scaling are features built into container orchestration platforms such as Kubernetes; therefore, you should not build these capabilities into your applications, but rather write your applications in a manner that allows you to embrace the container platform.

These key application design principles are based on an API/Microservices architecture, where business functions are packaged and deployed within containers and communicate with each other using API interfaces. As the number of microservices grows, the need to manage their interactions and provide key runtime capabilities becomes a critical requirement for success. Let's explore why the service mesh is the right architecture for microservices-based applications.

A service mesh is an infrastructure layer for controlling container (i.e. microservice-to-microservice) traffic in microservices-based applications. Each container (i.e. microservice) is deployed together with a separate “sidecar” proxy, which interacts with a “control plane” to enforce access between microservices. The service mesh provides a clear boundary between runtime operations and microservices functionality. It standardizes the runtime operations using a declarative approach, so you can write policies to enforce runtime behaviour without developing any code. For example, let's explore the circuit breaker pattern. This pattern helps prevent failure of your entire application when a single service or component is unresponsive. Netflix Hystrix is a popular library used within Java applications to provide circuit breaker functionality. The challenge with using a shared library is that it gets embedded in code and becomes difficult to manage when code changes need to be made; instead, using an out-of-process proxy (i.e. sidecar) allows your microservice to add circuit breaker capability without modifying your application.
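To make the pattern concrete, here is a toy in-code circuit breaker. It is a simplified sketch for illustration only (the class name and threshold logic are our own, not Hystrix's or Istio's API); in the sidecar approach described above, this same behaviour lives in the proxy rather than in application code.

```javascript
// Toy circuit breaker: after "threshold" consecutive failures the breaker
// opens, and further calls fail fast instead of waiting on an unresponsive
// service. A success while the circuit is closed resets the failure count.
class CircuitBreaker {
  constructor(threshold) {
    this.threshold = threshold; // consecutive failures allowed before opening
    this.failures = 0;
  }
  call(fn) {
    if (this.failures >= this.threshold) {
      // Open state: fail fast rather than invoking the service again.
      throw new Error('circuit open: failing fast');
    }
    try {
      const result = fn();
      this.failures = 0; // a success keeps the circuit closed
      return result;
    } catch (err) {
      this.failures += 1;
      throw err;
    }
  }
}
```

The value of the sidecar model is that none of this logic needs to be compiled into each microservice; the proxy applies it uniformly, driven by declarative policy.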

Istio is an open community project that implements the service mesh architecture. It is built on top of Kubernetes and provides an additional runtime layer that adds scalability, routing, A/B testing, and more. It allows you to inject a sidecar proxy into an existing Kubernetes pod without changing your application, reducing the friction for adoption. You automatically gain the benefits of telemetry, security, and circuit breaking without writing code or modifying configuration.

In this workshop, you will learn how to deploy a single-page application (SPA) built with API/Microservices design principles into the Istio service mesh. You will get hands-on experience configuring Istio-based policies to manage microservices interaction (i.e. service routing) and improve infrastructure resiliency (i.e. circuit breaking) without writing application code. Overall application resiliency is enhanced when you deploy your application within the Istio service mesh.

View Workshop Detail

Hands-On: Easy Microservices Application Development with Microclimate (HandsOn, in Orchid)

Chair(s): Elson Yuen , Eric Peters , Rajiv Senthilnathan , Maysun Jamil Faisal , Steven Hung
Theme: Cloud Computing
Room: Orchid
Format: HandsOn
Level: Beginner
Prereq: Basic knowledge on Java and JavaScript is recommended.
Description: Microclimate is a brand-new, cloud-native development environment that offers a complete, end-to-end development experience for Microservices. Since Microclimate has been designed with a focus on containerization, it can run anywhere, from your local laptop to an IBM Cloud Private cluster.

With Microclimate, you can create or import Java, Node.js, or Swift applications into the development environment, and using any editor of your choosing, you can quickly start development on your application in a containerized environment. Through a process called Rapid Iteration, Microclimate will quickly detect any changes that occur in your project and determine the minimal and best course of action to update your application. From there, using our integrated DevOps pipeline, you can deploy your application with Jenkins to a live ICP cluster. With these features, Microclimate offers a fully featured development experience that many other environments don't offer today.

During the hands-on workshop, we will give you an introduction to Microclimate, starting from product installation through writing Microservices applications that run on Microclimate in a Docker environment. You will get hands-on experience creating new applications and importing existing applications into Microclimate. For developers, a crucial part of the development cycle is the ability to quickly develop and test application changes on a running application. The develop-deploy-test-repeat cycle must be as short as possible in order to prevent lost developer productivity due to deployment downtime. You will be given the opportunity to experience this rapid iterative development support by developing Java and JavaScript applications in this workshop.

Finally, during the workshop we will introduce the integrated DevOps pipeline functions that allow you to get into production fast with a preconfigured DevOps pipeline and deploy applications to IBM Cloud Private (ICP). We will also show you the diagnostic services that help you do problem determination in production.

View Workshop Detail

IBM Voice Agent with Watson (HandsOn, in Violet)

Chair(s): Alice Yeung , Rick Chen , Philip Kurowski , Trevor Crawford , Meswan Bhaugeerutty
Theme: Cognitive Computing
Room: Violet
Format: HandsOn
Level: Beginner
Prereq: Attendees need to create trial Twilio and IBM Cloud accounts during the workshop. The accounts are free and no credit card information is needed.
Description: Cognitive chat bots are changing the way businesses are interacting with their customers. Whether embedded in a web page, or talking to users via a mobile application, Watson powered cognitive bots can resolve queries quickly and efficiently.

To better leverage this powerful technology, businesses can use Voice Agent with Watson on IBM Cloud to quickly build Watson powered chat bots (voice agents) and connect them to the telephone network.

By connecting Speech to Text, Text to Speech, and Watson Assistant, voice agents can identify what a caller is saying and respond back in real time. Because Watson Assistant can process natural language, it is able to converse with callers using complete sentences, helping to improve the experience for callers.
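The flow of a single caller turn can be sketched as the pipeline below. All three service functions are hypothetical stand-in stubs for illustration; they are not real Watson SDK calls, and the canned replies are invented.

```javascript
// Hypothetical sketch of one turn of the voice-agent pipeline:
// caller audio -> Speech to Text -> Watson Assistant -> Text to Speech.
function speechToText(audio) {
  return audio.transcript; // stub: a real call would transcribe the audio
}
function assistant(text) {
  // stub: a real call would send the text to a Watson Assistant dialog skill
  return text.toLowerCase().includes('hours')
    ? 'We are open 9am to 5pm.'
    : 'Could you rephrase that?';
}
function textToSpeech(text) {
  return { transcript: text }; // stub: a real call would synthesize audio
}
function handleCallerTurn(callerAudio) {
  const text = speechToText(callerAudio);   // what the caller said
  const reply = assistant(text);            // what the agent decides to say
  return textToSpeech(reply);               // audio played back to the caller
}
```

Voice Agent with Watson wires an equivalent pipeline to the telephone network, so each step runs against live call audio rather than stubs.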

Voice agents can act as a self-service solution to solve the most common calls entering call centers. By answering the main volume of calls with voice agents, businesses can resolve calls more effectively. This results in a reduction to the volume of calls sent to human agents.

View Workshop Detail

Third Annual Workshop on Data-Driven Knowledge Mobilization (Panel, in Jasmine)

Chair(s): Kelly Lyons , Eleni Stroulia , Marcellus Mindel
Theme: Data and Analytics
Room: Jasmine
Format: Panel
Level: Beginner
Prereq: None
Description: Knowledge mobilization and translation describes the process of moving knowledge from research and development (R&D) labs into environments where it can be put to use. There is increasing interest in understanding mechanisms for knowledge mobilization, specifically with respect to academia and industry collaborations. At the same time, the number of available datasets and accessible analysis tools is growing.

Building on the discussions and results presented at previous workshops, the third annual workshop on data-driven knowledge mobilization will bring together researchers, students, and industry partners to present results and discuss challenges associated with the analysis of datasets related to knowledge mobilization. In order to understand the processes of knowledge mobilization, we need access to certain datasets and specific analysis techniques. We will present details of curated datasets and analysis techniques that support the analysis of individuals and resources, expertise and work activities, and work outputs and innovations.

The goals of the workshop are to bring participants together to share results and outcomes and to discuss challenges and future directions. In this workshop, we will report on research underway as part of a Strategic Partnership Project on Data-Driven Knowledge Mobilization, Translation, and Innovation. The Strategic Partnership Project is in its final year. Since the start of the project, several people have participated in the research including six investigators, two postdoctoral fellows, eight Ph.D. students, five masters students, and fourteen industrial and partner scientists. One of the goals of the project is to produce a repository of datasets and analysis tools. The theme of this 3rd annual workshop focuses on tools to enable the analysis of datasets that will help in understanding the processes of knowledge mobilization.

There will be five presentations by researchers and students involved in the project followed by a panel discussion.

View Workshop Detail

The 10th CASCON Workshop on Cloud Computing (Speakers, in Conf Center 1)

Chair(s): Marin Litoiu , Joe Wigglesworth
Theme: Cloud Computing
Room: Conf Center 1
Format: Speakers
Level: Intermediate
Prereq:
Description: The goal of the workshop is to bring together researchers and practitioners from government, industry, and academia to present and share best practices and research agendas at the intersection of Cloud Computing and the Internet of Things: development, deployment, runtime management, quality of service, and runtime models. We will particularly focus on several main topics: cloud requirements for the Internet of Things, deployment and adaptive runtime management, cognitive capabilities, and security and privacy. DevOps plays an important role in the IoT and cloud ecosystems, providing the mechanisms that enable agile development and operations, and it is also a topic of the workshop. Application domains such as smart buildings and smart cities will be illustrated.

This half-day workshop will consist of presentations and a panel. The presentations will be structured along the main themes of the workshop. To encourage discussion and provide a more open perspective, we have included a panel where industry and academic experts will present their visions and answer questions from the audience.

View Workshop Detail

CASCON Workshop on Developing Big Data Applications and Services - BDAS 2018 (Speakers, in Elm2)

Chair(s): Darlan Arruda , Nazim H. Madhavji , Colin Taylor
Theme: Other
Room: Elm2
Format: Speakers
Level: Intermediate
Prereq: To make the most of this workshop, participants must have a working (general) knowledge of software engineering, software development, and Big Data.

Nice to have: Industry experience in developing Big Data Applications and Services.
Description: 1. Background
Research from Gartner (2015) indicated that, by 2017, 60% of Big Data projects would fail or not provide the expected benefits. However, in November 2017, Nick Heudecker, a Gartner analyst, posted on his Twitter account that they had been too conservative: the Big Data project failure rate is now close to 85%. The reasons are not only related to the technology itself; they are a mix of environmental, technological, and managerial problems. Some of the reasons Big Data projects fail are, at the project level: a missing link to business objectives, lacking big data skills, relying too much on the data, failing to convince executives, and poor planning; and at the technical level: rapid technology changes, difficulty in selecting Big Data technologies to address the system and project requirements, complex integration between new and old systems, computation of intensive analytics, and the necessity of high scalability, availability, and reliability, to name a few. Further, our previous study has shown that there is approximately an 80:20 split in industry focus in favour of “algorithms for analytics” and “infrastructure”, thereby shortchanging the aspects of creating and evolving “applications” and “services” concerned with Big Data.

2. Importance
The emerging data on project challenges and failures should be of immense concern to the Big Data software community. It calls for a meeting of the minds to deliberate about, and share experiences concerning, the development of Big Data applications and services. Representation from both industry and academia is needed to cut through the barriers facing the community today. Working in isolation may prolong the pain and agony of the challenges faced in the Big Data software community; in turn, society at large is deprived of the potential benefits of Big Data applications and services. The proposed workshop comes at a critical juncture in the fast-emerging field of Big Data applications and services development. The workshop aims to be a catalyst in the movement toward Big Data application building and services creation. It will form a platform for participants, from both practice and research, to deliberate on, and achieve a deeper understanding of, the different activities, methods, tools, processes, system artefacts, constraints, conditions, etc., involved in Big Data projects.

3. Purpose and Interest
Given the described importance of the proposed workshop, the purpose of the workshop includes: (1) sharing Big Data project experiences among the participants, and identifying challenges in the design, implementation, deployment, and evolution of Big Data applications and services; (2) fostering a Big Data community of researchers and practitioners focused on applications and services, and (3) compiling an agenda for future research. All application domains are of interest in this workshop. In the workshop, we shall identify, debate about, and discuss solutions to, the barriers challenging the development, deployment, evolution, and success of Big Data Applications and Services.

View Workshop Detail

The Best of IBM Innovation: Advancements through Overcoming Technological Uncertainties (Speakers, in Evergreen)

Chair(s): Jerrold Landau , Perry Fuller
Theme: Data and Analytics
Room: Evergreen
Format: Speakers
Level: Beginner
Prereq: None necessary. Beginners and experienced people welcome.
Description: There are many factors that spur innovation in the field of technology. One such factor is governmental support. The Canadian government provides such support through the SR&ED (Scientific Research and Experimental Development) tax credit program. Not surprisingly, IBM has availed itself of this tax credit for many years.

The fundamentals of SR&ED are based on three pillars: a) identifying a technological uncertainty, b) overcoming the uncertainty through a scientific experimental process, and c) leading to an advancement in technology. The advancement is often defined as the acquisition of new knowledge in the domain. It should be noted that the advancement need not be incorporated into a product, and indeed need not be successful in the classical business sense of the term: one advances technology just as well by proving that something is not feasible as by proving that it is. This definition of advancements in technology, while nuanced and subject to governmental SR&ED program regulations, can be applied in a general sense to many fields of scientific innovation.

It has been noted that both the IBM CAS organization and IBM's participation in the SR&ED program promote innovation through an exploration of the proverbial 'bleeding edge' of technology. In this workshop, we will provide an overview of the technological criteria for participation in the SR&ED program and present SR&ED as an indicator of technical vitality. This will be followed by presentations from several IBM Lab teams highlighting their SR&ED claims over the past several years. It is expected that the audience will come away with a new perspective on scientific innovation as applied to the field of software development.

View Workshop Detail

Large-Scale Multilevel Streaming Data Analytics (Speakers, in Primrose)

Chair(s): Farhana Zulkernine , Haruna Isah
Theme: Data and Analytics
Room: Primrose
Format: Speakers
Level: Beginner
Prereq:
Description: Motivation and Justification:

There is a monumental shift happening in how data powers organizational and business operations. This shift is a move away from traditional batch data analytics toward real-time and hybrid analytics involving both static and continuous data, which avoids delays in generating insights and the need to store massive amounts of streaming data. A good number of analytics systems currently use stream processing, without storing the data, to quickly ingest, analyze, and correlate information as it arrives from thousands of real-time sources (devices, sensors, and applications). Such systems often provide real-time dashboards and critical alerts, and therefore must be fast, efficient, effective, scalable, and reliable.

In most cases, stream processing is followed by batch processing for deeper analysis. Modern streaming analytics systems, therefore, try to unify batch and streaming analytics into a seamless data processing pipeline. A general architecture of a large-scale multilevel analytics system consists of (i) an ingestion mechanism at the front end, (ii) streaming and batch data processing engines for data transformation, scoring, modelling of historical data, and real-time prediction, (iii) data storage units for persisting, indexing, searching, and knowledge management, (iv) a resource management unit for the coordination of distributed compute and storage resources, and (v) visualization units to present results and knowledge for decision support.
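
As a toy illustration of stages (i)-(iii) of such a pipeline, the sketch below buffers an incoming stream into small windows and keeps only per-window summaries. The function name and window size are invented for illustration; a real deployment would use a streaming engine rather than a Python loop.

```python
from collections import deque

def micro_batch_pipeline(events, window_size=3):
    """Toy micro-batch stage: ingest events one at a time, buffer them
    into fixed-size windows, and reduce each window to a summary that a
    batch layer or dashboard could consume."""
    window = deque()
    summaries = []
    for event in events:              # (i) ingestion
        window.append(event)
        if len(window) == window_size:
            # (ii) per-window "analytics": here just an average
            summaries.append(sum(window) / window_size)
            window.clear()            # (iii) persist summaries, not raw data
    return summaries

# Example: sensor readings arriving as a stream
readings = [10, 12, 14, 20, 22, 24]
print(micro_batch_pipeline(readings))  # two windows -> [12.0, 22.0]
```

The key design point the sketch captures is that raw events are discarded once a window is summarized, which is exactly the selective-storage trade-off the workshop discusses.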

Some of the deeper analytics on streaming data require longer execution times and can choke the data processing pipeline. The combined stream-plus-batch approach solves that problem. However, as we progress toward the Internet of Things (IoT), such an approach will face serious computational and storage challenges. Innovative solutions are needed to selectively store streaming data, enable near-real-time micro-batch processing, and perform multilevel in-memory analytics.

Large-scale multilevel analytics on a unified platform is gaining increasing attention in industry, as it can potentially enhance business and operational decision making. However, it faces the following challenges: a) implementing an efficient front end for ingestion and integration of massive data streams from across the globe, b) combining streaming and in-memory data analytics, c) developing a knowledge management strategy to store, manage, and link big data and distributed knowledge, and d) other challenges including cluster management, knowledge representation, and visualization. These challenges make the development of methods, algorithms, and infrastructures for multilevel streaming analytics a challenging but interesting research problem.

Goals and Outcomes:

This workshop aims to provide a forum for researchers and industry practitioners to discuss new ideas and share their experiences in the areas of streaming data analytics. Participants will present their work on topics including methods, models, algorithms, infrastructures, quality issues, applications, and open problems for large-scale streaming data analytics. The workshop can serve as a guide for organizations and individuals planning to implement a real-time data stream processing and multilevel data analytics framework.

Workshop Structure:

The half-day workshop will feature invited talks by experts, practitioners, researchers, and industry partners working on massive streaming analytics research. There will be time for discussion after each presentation, encouraging the audience to share comments and views and to ask the speaker questions.

View Workshop Detail

Wednesday AM workshops run from 08:30 to 10:45 AM

Building Microservices in a Cloud-Native World using Eclipse MicroProfile and Open Liberty (HandsOn, in Orchid)

Chair(s): Eveline Cai , Gilbert Kwan , YK Chang , Panagiotis Roubatsis
Theme: Cloud Computing
Room: Orchid
Format: HandsOn
Level: Beginner
Prereq:
Description: Eclipse MicroProfile is a set of open technologies for optimizing enterprise Java for a microservices architecture. Open Liberty is the open-source foundation of WebSphere Liberty, IBM's strategic Java application server for a cloud-native world. Come and learn how you can easily build microservices with Eclipse MicroProfile and Open Liberty. Get your hands dirty with various aspects of building cloud-native applications, from the foundations of RESTful services to what you need to manage many microservices and operate them reliably.

View Workshop Detail

Refine, restructure and make sense of data visually, using IBM Watson Studio (HandsOn, in Violet)

Chair(s): Serjik Dikaleh , Darren Pape , Dharmesh Mistry , Chris Felix , Ozair Sheikh
Theme: Cognitive Computing
Room: Violet
Format: HandsOn
Level: Beginner
Prereq: Must sign up for IBM Cloud account prior to the workshop (trial account/free tier is fine) - https://www.ibm.com/cloud/
Optional basic coding knowledge
Description: More than ever before, larger and more comprehensive data sets are being made publicly available on the internet. You can find data on all sorts of topics, such as housing prices, sports, wine reviews, weather, movies, TV shows, gun violence, and anything else you can think of. How does one make sense of all this data, and can you combine different data sets to get new insights for your needs?

In this workshop, we will begin by talking about where you can find open data sets and show some examples of how they have been used to gain insights. We will then take some sample data sets and explore them through IBM Watson Studio. Finally, we will create visualizations of the data using both open-source programming concepts and the tools available in IBM Watson Studio.
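
The "combine data sets for new insights" idea can be sketched in a few lines of plain Python. The CSV contents, city names, and column names below are invented stand-ins for downloaded open data files; Watson Studio's refinery tools do this kind of join graphically.

```python
import csv
import io

# Two hypothetical open data sets, inlined as CSV for the sketch.
housing_csv = "city,median_price\nToronto,900000\nOttawa,450000\n"
weather_csv = "city,avg_temp_c\nToronto,9.4\nOttawa,6.6\n"

def load(text):
    """Index a CSV data set by its 'city' column."""
    return {row["city"]: row for row in csv.DictReader(io.StringIO(text))}

housing, weather = load(housing_csv), load(weather_csv)

# Join the two data sets on the shared "city" key to derive a combined view.
combined = {
    city: {**housing[city], **weather[city]}
    for city in housing.keys() & weather.keys()
}
print(combined["Toronto"])
```

Once two independently published data sets share a key like this, any column from one can be analyzed against any column from the other.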

View Workshop Detail

2nd Workshop on DevOps and Software Analytics for Continuous Engineering and Improvement (Speakers, in Jasmine)

Chair(s): KONSTANTINOS KONTOGIANNIS , Chris Brealey , Alberto Giammaria , Brian Countryman , Marios-Stavros Grigoriou
Theme: Systems
Room: Jasmine
Format: Speakers
Level: Intermediate
Prereq: Knowledge on software development process and software engineering principles
Knowledge on DevOps tools and frameworks
Experience in software development and its life-cycle
Description: A key issue that emerges in the software engineering community is how to provide efficient DevOps tools and processes that facilitate continuous delivery and improvement, particularly in and for Cloud based environments where continuous delivery at speed with high quality can be crucial to business success.
This workshop aims to bring together experts from industry and academia to discuss and debate the latest trends in the design of frameworks that support DevOps practices for complex systems that are developed and evolved within a “Measure-Analyze-Assess-Act” loop. Such frameworks utilize software repositories, software analytics, process analytics, the quantification of technical debt as a failure-risk predictor, and the system’s run-time behavior to dynamically assess deploy/no-deploy choices and achieve continuous deployment.
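
The "Assess" step of such a loop can be pictured as a gate that turns analytics signals into a deploy/no-deploy decision. The sketch below is a minimal illustration only; the metric names and thresholds are invented, and a real framework would derive them from repository and runtime analytics rather than hard-code them.

```python
def deploy_gate(metrics, max_debt=0.3, min_coverage=0.8, max_error_rate=0.01):
    """Toy 'Assess' step of a Measure-Analyze-Assess-Act loop:
    combine analytics signals into a single deploy/no-deploy decision,
    returning the decision plus a per-check breakdown for diagnosis."""
    checks = {
        "technical_debt_ok": metrics["technical_debt_ratio"] <= max_debt,
        "coverage_ok": metrics["test_coverage"] >= min_coverage,
        "runtime_ok": metrics["error_rate"] <= max_error_rate,
    }
    return all(checks.values()), checks

ok, detail = deploy_gate({"technical_debt_ratio": 0.2,
                          "test_coverage": 0.85,
                          "error_rate": 0.005})
print(ok)  # True: all gates pass
```

Returning the per-check breakdown alongside the decision is what lets the "Act" step do something more useful than simply blocking the release.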

View Workshop Detail

Wednesday PM workshops run from 02:15 to 05:30 PM

IBM Security Guardium Analyzer Bootcamp (HandsOn, in Orchid)

Chair(s): Devan Shah , Larry Lindsay , Josue Diaz , Sagi Shechter , Andy Becher
Theme: Security
Room: Orchid
Format: HandsOn
Level: Beginner
Prereq: IBM ID account already created
Description: The General Data Protection Regulation (GDPR) requires organizations to implement adequate controls to protect personal and sensitive personal data. A critical step in that journey is impact assessment: understanding where data subject to GDPR is located and how vulnerable it is. In this session, we will introduce IBM Security Guardium Analyzer, a new SaaS offering that helps organizations easily complete the impact assessment for their databases. You will get hands-on experience using the Guardium Analyzer solution to quickly locate GDPR-relevant data and determine the risk on existing databases.
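
To give a feel for the "where is personal data?" half of an impact assessment, here is a deliberately simplified sketch. These two regexes and the function name are invented for illustration and are not how Guardium Analyzer itself works; real data classification uses far richer detection than pattern matching.

```python
import re

# Hypothetical, simplified patterns; illustrative only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scan_rows(rows):
    """Scan database rows (rendered as strings) and count matches per
    PII type, giving a crude signal of where personal data lives."""
    hits = {name: 0 for name in PII_PATTERNS}
    for row in rows:
        for name, pattern in PII_PATTERNS.items():
            hits[name] += len(pattern.findall(row))
    return hits

rows = ["alice@example.com placed order 42", "call 416-555-0199"]
print(scan_rows(rows))  # {'email': 1, 'phone': 1}
```

Counts like these, aggregated per table and combined with vulnerability data, are the raw material for the risk scoring the session demonstrates.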

View Workshop Detail

Deriving Client Insights in the Financial Sector (HandsOn, in Violet)

Chair(s): Diane Reynolds , DAVID DCOSTA , David Xie , Seacy Zhen
Theme: Data and Analytics
Room: Violet
Format: HandsOn
Level: Intermediate
Prereq: - solid knowledge of coding in Python
- exposure to wealth management / financial sector
- willingness to participate in team/group activities
Description: Join us in this hands-on workshop to create your own reports and dashboards to support a financial advisor in completing key elements of his/her day-to-day activities. We'll look at data requirements, experiment hands-on with the data, clean it, load it to a data-science-friendly environment, run some standard models and then extend those models in different ways. Finally, we'll bring together the results in a user-friendly way.

If you've wondered about how to operationalize your machine learning algorithms, want to get deeper into data science as a financial-sector participant or are interested in IBM's ecosystem for machine learning innovation, this is the right workshop for you!
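
The "clean it, load it" step above often dominates the day-to-day work. As a minimal sketch of what that looks like in code, the function below normalizes currency strings and drops unusable rows; the field names and sample records are invented and stand in for real advisor data.

```python
def clean_holdings(raw_records):
    """Toy cleaning pass over client holdings data: normalize currency
    strings like '$1,250.50' to floats and drop rows missing a client id.
    Field names are invented for illustration."""
    cleaned = []
    for rec in raw_records:
        if not rec.get("client_id"):
            continue  # a row without a key can't be joined to anything
        value = rec["market_value"]
        if isinstance(value, str):
            value = float(value.replace("$", "").replace(",", ""))
        cleaned.append({"client_id": rec["client_id"], "market_value": value})
    return cleaned

raw = [{"client_id": "C1", "market_value": "$1,250.50"},
       {"client_id": "", "market_value": "99"},
       {"client_id": "C2", "market_value": 300.0}]
print(clean_holdings(raw))
```

Only after a pass like this does it make sense to load the data into a data-science-friendly environment and run models over it.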

View Workshop Detail

iCity - Big Data and Visualization Urban Transportation Strategies (Speakers, in Jasmine)

Chair(s): Sara Diamond
Theme: Data and Analytics
Room: Jasmine
Format: Speakers
Level: Beginner
Prereq: There are no prerequisites for the workshop.
Description: Providing efficient, cost-effective, sustainable transportation networks and services is a major challenge for cities around the world – not only for individual cities, but for connectivity between cities. High-quality transportation services, notably well-designed transit hubs within comprehensive networks, are fundamental prerequisites for effective cities and spur economic, social, and cultural inclusion, development, and growth. Transportation strategies must be at the heart of smart-city strategies. The melding of machine learning, simulation, predictive analytics, and design creates capacity and connectivity that will help policymakers gain insight into complex decision-making processes and support evidence-based decision making. Solving transportation and transit challenges requires integrating transdisciplinary knowledge, including computer science and engineering, into city planning.

View Workshop Detail

Wednesday Full-Day workshops run from 08:30 to 10:45 AM, and then continue from 02:15 to 05:30 PM.

Practical Machine Learning with Python on DSX (HandsOn, in Conf Center 2)

Chair(s): Shaikh Quader , Mark Ryan , Eric Dong
Theme: Data and Analytics
Room: Conf Center 2
Format: HandsOn
Level: Beginner
Prereq: * Coding experience in any programming language
* Users MUST bring their own laptops for the hands-on exercise
* IBM Cloud id: https://console.bluemix.net/registration/
Description: In this FULL-DAY workshop, after a brief introduction to Machine Learning, we'll take students through the hands-on exercise of building a Machine Learning model from scratch. They will learn about, and write code for, the different phases of a Machine Learning pipeline, including acquiring, cleaning, and exploring data, and building and evaluating an ML model. Finally, we'll discuss how to build a discipline of continuous learning in ML and how to apply it to solving real problems.
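
The pipeline phases named above (build, score, evaluate) can be seen end to end even in a tiny pure-Python example. The nearest-centroid classifier and the synthetic data below are invented stand-ins for the models and data sets used in the workshop.

```python
def train_centroids(X, y):
    """'Build' phase: compute the per-class mean of the feature vectors."""
    sums, counts = {}, {}
    for features, label in zip(X, y):
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(centroids, features):
    """'Score' phase: pick the class whose centroid is closest."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

def accuracy(centroids, X, y):
    """'Evaluate' phase: fraction of correct predictions."""
    return sum(predict(centroids, f) == label for f, label in zip(X, y)) / len(y)

# Tiny synthetic data set: two well-separated classes.
X_train = [[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.8, 8.2]]
y_train = ["low", "low", "high", "high"]
model = train_centroids(X_train, y_train)
print(accuracy(model, X_train, y_train))  # 1.0 on this toy data
```

In the workshop the same three phases appear with real data and real models, but the shape of the pipeline is the same.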

View Workshop Detail

Introduction to the IBM Q experience and Quantum Computing (HandsOn, in Holly-Butternut)

Chair(s): Mehdi Bozzo-Rey , Robert Loredo
Theme: Other
Room: Holly-Butternut
Format: HandsOn
Level: Beginner
Prereq: It is assumed that participants will:
- bring their own laptops
- have the anaconda python distribution installed (www.anaconda.com)
- have an IBM Q experience account (quantumexperience.ng.bluemix.net)
Description: IBM's work in quantum computing started in the 1970s with the birth of quantum information theory, and the first conference on the Physics of Computation was co-hosted by IBM and MIT in 1981. The quantum-foundations era, when quantum computing was the exclusive domain of scientists and theoreticians, is now history. We are now in a quantum-readiness phase, where education, algorithm development, and the identification of use cases that may lead to demonstrations of quantum advantage are key.

In 2016, IBM made quantum computing devices available to the public at no cost in the cloud, and then launched the IBM Q network in 2017. Programming a quantum computer is quite different from “classical programming”: it is done with a circuit made of quantum gates that execute the quantum algorithm. For devices with a small number of qubits, a GUI-driven interface can be used to build the circuit, making quantum programming accessible even to high schoolers. For devices with more qubits, or for more complex circuits, a programming framework is needed.

In this hands-on workshop, we will review the basics of quantum computing, go through account creation on the IBM Q experience and basic use of the Composer (GUI driven interface), installation of the Open Source Qiskit and Qiskit AQUA frameworks, and how to execute simple quantum circuits on either a local quantum simulator or on a real quantum device that operates at a temperature colder than outer space.
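
What a quantum simulator actually does can be glimpsed in a few lines of plain Python: track a vector of amplitudes and apply gates as linear maps. The sketch below builds the classic Bell state with a Hadamard and a CNOT; it is an illustration of the idea, not the IBM Q experience or Qiskit API, and the function names are invented.

```python
import math

# State vector over 2 qubits: amplitudes for |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def hadamard_q0(s):
    """Apply H to the first qubit: mixes the |0x> and |1x> amplitudes."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot(s):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    return [s[0], s[1], s[3], s[2]]

bell = cnot(hadamard_q0(state))
probs = [round(a * a, 3) for a in bell]
print(probs)  # [0.5, 0.0, 0.0, 0.5] -- measure 00 or 11, each 50%
```

The Composer builds exactly this kind of circuit graphically, and Qiskit runs it on a real device instead of a list of floats.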

View Workshop Detail

Compiler-Driven Performance Workshop (Speakers, in Conf Center 1)

Chair(s): Gennady Pekhimenko , Ettore Tiotto
Theme: Systems
Room: Conf Center 1
Format: Speakers
Level: Intermediate
Prereq:
Description: The workshop has a particular focus on (but is not limited to):
Innovative compiler analysis, transformation, and optimization techniques
Languages, compilers, and optimization techniques for multicore processors and other parallel architectures
Compiling for streaming or heterogeneous hardware
Dynamic compilation for high-performance and real-time environments
Compilation, optimization, and analysis for dynamic languages
Compilation techniques for reducing power
Program safety
Whole system optimization and analysis
Tools and infrastructure for compiler research
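
As a small taste of the first topic, a compiler transformation can be prototyped directly on Python's own AST. The constant folder below is a minimal sketch, not any production compiler's implementation; it rewrites binary operations over literal constants into their computed values.

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Minimal compiler-style transformation: fold binary operations
    whose operands are both literal constants into a single constant."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            try:
                value = eval(compile(ast.Expression(node), "<fold>", "eval"))
            except Exception:
                return node  # e.g. division by zero: leave it for runtime
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("x = 2 * 3 + 4")
folded = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(folded))  # x = 10
```

The bottom-up traversal is the essential detail: `2 * 3` must collapse to `6` before the enclosing `+ 4` can be folded, mirroring how real optimizers iterate to a fixed point.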

View Workshop Detail

Distributed Ledgers and Blockchain: Concepts and Applications (Speakers, in Evergreen)

Chair(s): Asic Chen , Arno Jacobsen
Theme: Other
Room: Evergreen
Format: Speakers
Level: Beginner
Prereq: General computer science and computer engineering (information systems background as taught in Bachelor’s curricula)
Description: Blockchain has been, without doubt, one of the hottest topics in technology in recent years. As is the case with most “buzzwords”, most people, even software professionals, have no more than a surface comprehension of the technology. We hope that through this workshop we can demystify blockchain and distributed ledgers, giving attendees a working understanding while exploring use cases far beyond the most popular one: cryptocurrency. During this workshop, we will provide a tutorial-style introduction to various distributed ledger and blockchain technologies. We will focus on first principles and algorithms while identifying emerging blockchain use cases.
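
The core first principle, hash chaining, fits in a short sketch. The code below is a toy ledger for illustration (no consensus, no signatures, invented field names); it shows only why tampering with one block invalidates everything after it.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A block commits to its data and to the previous block's hash,
    so changing any earlier block breaks every later link."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(json.dumps(
            {"data": block["data"], "prev_hash": block["prev_hash"]},
            sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("pay Alice 5", genesis["hash"])]
print(verify(chain))   # True
chain[0]["data"] = "pay Mallory 500"  # tamper with history
print(verify(chain))   # False: the stored hash no longer matches
```

Everything else a distributed ledger adds (consensus, smart contracts, permissioning) is layered on top of this tamper-evidence property.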

View Workshop Detail

2nd Workshop on Advances in Open Runtime Technology for Cloud Environments (Speakers, in Primrose)

Chair(s): Daryl Maier , Kenneth Kent
Theme: Cloud Computing
Room: Primrose
Format: Speakers
Level: Intermediate
Prereq:
Description: Modern language runtimes are complex, dynamic environments that involve a myriad of components that must work cooperatively to achieve the functional and performance requirements of a given language. Typical core runtime technologies include dynamic just-in-time compilers for performance, garbage collection for heap management, platform abstraction for ease of portability to different hardware and operating system environments, developer tooling for diagnosis and tuning of the various components, and interoperability between different language environments.

Cloud services such as IBM Cloud, Microsoft Azure, or Amazon Web Services (AWS) are increasingly becoming the environments where applications are developed and deployed, data is stored, and businesses are run. Many of the features that define a cloud (e.g., resiliency, elasticity, consistency, security) are realized through runtime technologies. Clouds are polyglot environments, and therefore advances in cloud development are directly driven by innovation in runtime technologies. However, cloud environments pose unique, often conflicting demands on runtime systems that are often less of a concern in isolated systems. Throughput performance (how fast is my application?), density (how many instances of my application can I run simultaneously in my provisioned environment?), startup performance (how quickly can I launch a new instance of my application?), and language interoperability (how can my JavaScript code efficiently call a function in a Python module?) are all important considerations that require innovation to solve effectively.

The goal of this workshop is to bring together research, industry, and development communities to share and discuss innovations, challenges, and research across a broad set of open-source runtime technologies (such as Eclipse OMR, LLVM, Eclipse OpenJ9, and Node.js) for cloud environments. The focus on open rather than proprietary technology solutions is key, as it allows greater collaboration among individuals, communities, researchers, and companies through shared learning on common technology.

View Workshop Detail