Ogyoku
Monday, October 26
 

9:30am JST

RSVP Required: Command Presence Workshop presented by the Women of OpenStack
Limited capacity; seats available

RSVP here: https://docs.google.com/spreadsheets/d/1fj7V0nLh7UBvj7I7htWLz63ygdk4vRDkU2DqB-t1TQo/edit?usp=sharing

Capacity: 30

Target audience: technical women who regularly attend technical meetings [Working Groups] and would like to develop the ability to “command their presence” in high-pressure meetings.

Workshop Background 
The Command Presence Workshop (CPW) has been recognized by the Anita Borg Institute and the Harvard Business Review. It is intended to teach participants how to handle senior-level, high-pressure meetings. The majority of the workshop is a role-play simulation: a task force convened to address a simulated, but very realistic, crisis event. In the class, each participant assumes a role and is given about 30 minutes to prepare 1-2 slides. Once the class has prepared, the simulation begins. It is not uncommon to hear occasional “gasps” of disbelief from the participants. Workshop participants thus experience rapid-fire questioning, interruption, and other situations that occur in senior-level meetings, but in a safe environment conducive to learning techniques for handling such situations.

Speakers

Dr. Malini Bhandaru

Architect, Intel
Malini Bhandaru is a Sr. Cloud Architect with the Open source Technology Center, Intel and has been involved with OpenStack for over three years. Her tenure at Intel spans work on cloud and security, fast encryption algorithms, and Xeon platform power and performance. Prior to Intel... Read More →

Ruchi Bhargava

Director, Datacenter & Cloud Software Engineering, Intel Corporation
Ruchi Bhargava is the Director, Datacenter & Cloud Software Engineering with Intel's Open Source Technology Center. Prior to this, Ruchi was Intel IT's Hybrid Cloud product Owner responsible for building, deploying and running an enterprise Cloud for Intel. She has also held roles... Read More →


Monday October 26, 2015 9:30am - 12:30pm JST
Ogyoku
 
Tuesday, October 27
 

11:15am JST

Unlocking OpenStack for Service Providers
 Is OpenStack commercially viable for service providers?

Any service provider looking to develop its cloud solution business wants a service that can be brought to market quickly and cost effectively. It needs to provide differentiation, and to be able to scale as the service grows.

How do you achieve that? Build or buy? Or some combination of the two?

In this session we will go through some of the challenges we faced when creating OpenStack-based cloud service providers in the early days, and how we would do some things differently now.

Speakers

Omar Lara

Solutions Architect at Canonical
I am a sysadmin who takes advantage of open source software and Python programming to perform the vital tasks needed to reach the best quality of service for users, introducing new paradigms with in-depth knowledge of cloud computing. Specialties: Cloud Computing, IaaS, Virtualization... Read More →

Arturo Suarez

BootStack & Training Product Strategy, CANONICAL
I am the BootStack and Training Product Manager for Canonical. BootStack is the managed hosted (or on-prem) cloud service offered by the leading OpenStack OS company. The service includes a unique combination of long-pursued features within the industry: SLA driven, optional cloud control transfer... Read More →


Tuesday October 27, 2015 11:15am - 11:55am JST
Ogyoku

12:05pm JST

Building a Scalable Federated Hybrid Cloud
Amazon Web Services has offered regions across the globe and Availability Zones (AZs) as a standard offering for years. This tiering and choice of locations is a best practice that OpenStack architects should embrace as they build a blueprint for geo-distributed OpenStack clouds. The notion of different regions and AZs allows for smaller “blast zones” in the event of outages and enables use cases such as active/standby clouds, tenant data replication and new tiered application architectures. All of these use cases are extremely attractive, but if not undertaken correctly, a geo-distributed OpenStack endeavour can be unpredictable, costly and complex.

The cornerstone of a winning geo-distributed OpenStack cloud architecture is the networking layer. While there are existing network architectures that leverage established routing protocols and tunnelling mechanisms, OpenStack offers an opportunity to rethink this and implement a lightweight mechanism based on a radically simpler, software-based implementation.

In this session, learn how PLUMgrid is leveraging existing OpenStack-compatible technologies and extending them to support a geo-distributed OpenStack cloud architecture. We will delve into the technical implementation, physical infrastructure considerations and use case examples.

Speakers

Pere Monclus

CTO
Before co-founding PLUMgrid, Pere was a Distinguished Engineer at Cisco Systems in the Research and Advanced Development team, where he led innovation in the areas of cloud, security and converged infrastructure. Prior to that, he was responsible for the architecture and technology... Read More →

Sunny Rajagopalan

Principal Architect, PLUMgrid
Sunny is Principal Architect at PLUMgrid where he works on networking virtualization, cloud technologies, distributed platform and security. Prior to PLUMgrid, Sunny has worked as an Architect in the Networking CTO group at IBM, and before that he has held technical leadership positions... Read More →


Tuesday October 27, 2015 12:05pm - 12:45pm JST
Ogyoku

2:00pm JST

Empower Your Cloud Through Neutron Service Function Chaining
Today’s telecom services are driven by demand for cloud-based services. Customers want dynamic, on-demand cloud services over any combination of virtual and physical networks. The most urgent problem service providers have today is automating the delivery of network services and accelerating cloud service rollouts.

Service Function Chaining provides end-to-end network service delivery automation that operates completely independently of whether the network is physical or virtual. It integrates with today’s network infrastructure to create a network service abstraction layer that isolates operations from tedious and diverse configurations at the network layer, making the service layer simple, generic, and programmable. Network services can be auto-provisioned and deployed on both general COTS platforms and legacy network devices in your data center using service function chaining, in which a wide variety of service functions scattered over the network may be chained together in a flexible and agile manner to provide the desired service treatment. Scaling out these service functions to handle added load, or scaling in to reduce resource usage, is an integral part of the service function chaining solution.
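
To make the chaining model concrete, here is a minimal sketch (illustrative only, not the session's code or a committed Neutron API) describing a two-function chain as plain Python data, loosely following the port pair / flow classifier / port chain resource model proposed for Neutron service function chaining:

# Illustration only: a service chain described as plain data, loosely following
# the port pair / flow classifier / port chain model proposed for Neutron SFC.
# Every name and field here is an assumption for illustration, not a real API.
firewall_pair = {
    "name": "fw-pair-1",
    "ingress": "neutron-port-uuid-fw-in",   # port on the firewall VM/container
    "egress": "neutron-port-uuid-fw-out",
}

ids_pair = {
    "name": "ids-pair-1",
    "ingress": "neutron-port-uuid-ids-in",
    "egress": "neutron-port-uuid-ids-out",
}

# Select which traffic enters the chain (here: inbound HTTP).
web_classifier = {
    "name": "web-traffic",
    "protocol": "tcp",
    "destination_port_range_min": 80,
    "destination_port_range_max": 80,
}

# The chain itself: classified traffic traverses the firewall, then the IDS.
web_service_chain = {
    "name": "web-service-chain",
    "flow_classifiers": ["web-traffic"],
    "port_pair_groups": [["fw-pair-1"], ["ids-pair-1"]],
}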

This presentation will talk about how you can integrate the service function chaining feature, which is being developed as part of OpenStack Neutron, into your Cloud Platform to auto provision differentiated Cloud services in an agile and flexible way, and how to turbo-charge your cloud using service chaining. The following topics will be covered:

1. Overview of OpenStack Neutron Service Function Chaining Solution
2. How to define the service chains in a simple, prescriptive manner to create cloud services tailored to the needs of individual customers
3. How to ensure vendor independence and be agnostic of underlying network technology
4. How to achieve scalability and elasticity (scale-out and scale-in) of service functions on your cloud platform
5. How to integrate container technologies, such as Docker, with Service Function Chaining

Speakers

Paul Carver

Principal Member of Technical Staff, AT&T
Paul Carver is a Principal Member of Technical Staff at AT&T working on Software Defined Networking and Network Function Virtualization. His background includes traditional hardware networking with a wide variety of vendors in WAN and datacenter environments as well as software development... Read More →

Ralf Trezeciak

Network Architect, Deutsche Telekom

Cathy Zhang

Principal Architect, Huawei
Cathy has over 15 years of software design and development experience. She is currently a chief architect at Huawei’s USA Cloud Computing Lab. Her expertise includes Serverless Cloud Platform, Network Service and Virtualization, SDN, OpenStack, etc.. She is a key member of the Serverless... Read More →


Tuesday October 27, 2015 2:00pm - 2:40pm JST
Ogyoku

2:50pm JST

Multisite OpenStack - Deep Dive
Managing multiple sites of OpenStack is a headache.  Each site is an individual silo, with its separate identity management, resources, networks, images, etc.

The Tricircle project uses a top-level OpenStack instance to manage multiple bottom-level OpenStack instances with site-to-site connectivity, providing the same overlay network across sites. This allows a tenant to deploy VMs from the same virtual network on different OpenStack instances.

In this session, we will deep dive into the Tricircle project, covering aspects such as cross-site overlay network, image synchronization, VM migration, etc.

We will present our current design and roadmaps.

Speakers

Ayal Baron

Cloud computing CTO, Huawei
Ayal Baron is Cloud Computing CTO in Huawei's ITPL and brings nearly 20 years of software development experience in the fields of virtualization, storage and networking to his role. Prior to joining Toga, he was a Senior Engineering Manager at Red Hat leading cloud storage, and before... Read More →

Pino de Candia

CTO, Chief Architect, Midokura
As CTO, Pino is responsible for Midokura’s technical innovation and the evolution of its flagship technology, MidoNet. Pino de Candia joined Midokura as a Software Engineer in 2010. He built the early versions of MidoNet, led the Network Controller team as engineering lead and the Architecture... Read More →

Eran Gampel

Chief Architect - Cloud and Open Source at Huawei
Eran has over 20 years of R&D and entrepreneurship experience in multiple fields, such as networking, SDN, virtualization, cloud, open source, and others. He is currently Chief Architect of Cloud and Open Source in Huawei, managing a research team of open source experts, developing... Read More →


Tuesday October 27, 2015 2:50pm - 3:30pm JST
Ogyoku

3:40pm JST

Managing Multi-hypervisor OpenStack Cloud with Single Virtual Network
Based on the results of OpenStack user surveys, KVM is the de facto hypervisor used to underpin OpenStack clouds and Open vSwitch is the most common network plug-in. However, as OpenStack matures to run production workloads and broadens its reach beyond the core users, it is inevitable that additional hypervisors, containers, and bare metal workloads will gain a foothold.

The whole notion of supporting a multi-hypervisor OpenStack environment is need-driven. To illustrate, imagine that some databases in a business are virtualized on ESXi while the corresponding web servers are virtualized using KVM. In addition to this diversity, it is highly likely that all these hypervisors and Docker containers will be running concurrently in the same OpenStack cluster. As a result, a different approach to interconnection is required to provide a common network substrate.

We’ll discuss how multi-hypervisor networking in OpenStack can be achieved. This session will cover the technology options, the benefits of each and dive into various use cases where a multi-hypervisor OpenStack environment is desirable.

Speakers

Dhiraj Sehgal

Product and Solution Marketing, PLUMgrid Inc, PLUMgrid
Dhiraj works in the product and marketing organization of PLUMgrid. His focus has been customers, technologies and products, and how they interact with each other. He has a wealth of experience in datacenter technologies ranging from compute and networking to storage.


Tuesday October 27, 2015 3:40pm - 4:20pm JST
Ogyoku

4:40pm JST

Is DefCore Picking OpenStack Winners and Losers? Answers in Interop 101
By definition, DefCore picks what is required for vendors to use the OpenStack trademark. Ideally, that allows workloads to interoperate between vendors.  A common baseline may help users but can impact vendor differentiation.

So does that mean we are picking winners?  Join us at this session and we'll give you the background and review the critical issues to answer that question.

What is DefCore? With a growing number of OpenStack projects, we define the minimum set of OpenStack capabilities and components that must be enabled. We go further and define the actual tests (from Tempest) that vendor clouds must pass.

Why? DefCore is trying to define the minimum set of capabilities for INTEROPERABLE CLOUDS; it is not about an OpenStack distribution or release but trying to create a multi-vendor ecosystem.

After attending this session, you will know what DefCore is, how capabilities get defined, and how it is decided what is and what is not part of DefCore.

After we cover the basics, we'll bring you up to speed on the hard questions that DefCore is trying to address.

Speakers

Rob Hirschfeld

CEO, RackN
Rob has innovated in the edge, cloud and infrastructure space for 20 years and has done everything from working with early ESX betas to serving four terms on the OpenStack Foundation Board and as an executive at Dell. He's also the host of the Cloud2030 podcast focused on cloud, industry... Read More →

Egle Sigler

Principal Architect, Rackspace
Egle Sigler is a Principal Architect on the Private Cloud team at Rackspace and an OpenStack Foundation Board member. As part of the OpenStack Board, Egle is Co-Chair of the DefCore committee. Egle is very passionate about promoting women in technology. She has served for two years on a governing... Read More →


Tuesday October 27, 2015 4:40pm - 5:20pm JST
Ogyoku

5:30pm JST

Hybrid Cloud - A Different Approach to Managing Multiple Clouds in a Single Pane
 The term hybrid cloud is often used to describe a “cloud burst” scenario – extending the organic datacenter to the cloud, either by moving specific applications/workloads that are suitable for cloud offloading, or by using more advanced solutions that provide additional resource capacity from the public cloud provider.

In this session, we will present a different approach to managing a “hyper cloud” – a cloud over clouds that is completely OpenStack-based, centrally managed, does not require any organic datacenter to begin with and which provides a single overlay network.

We will touch on the following aspects: central management, overlay network, image portability & synchronization and zero-configuration VM migration.

Speakers

Tuesday October 27, 2015 5:30pm - 6:10pm JST
Ogyoku
 
Wednesday, October 28
 

11:15am JST

The Media Talks Back to OpenStack
Ever wonder what the analysts and media who cover OpenStack *really* think?  This session features an all-star panel of pundits who have made names for themselves asking tough questions and writing content about OpenStack that gets noticed. We'll explore topics including:

- What are the nagging questions about OpenStack that you don't get satisfactory answers to?
- Who is doing the best job right now of talking about their OpenStack game plan in clear, consistent and pointed language?
- What do you wish people in the OpenStack community would stop saying?

Join us and be prepared to talk about what's on your mind. It's a great opportunity to get direct answers from the folks who are usually asking the questions.  

Moderators

Ken Pepple

Ken has over 20 years of experience in global technology companies. He has worked with many enterprises and service providers in both the United States and Asia to architect, develop and integrate new technologies into their organizations. Prior to co-founding Solinea, Ken was Cloud... Read More →

Speakers

Matthew Cheung

Matthew Cheung is a Research Director with the Gartner Technology and Service Provider Research group, where he specializes in operating systems, security, virtualization and IT operations management software. He has more than 13 years of experience in the IT industry.

Frederic Lardinois

Frederic has spent more than five years covering news and providing analysis about technology, the industry and consumer tech related to the Internet with potential to influence industry direction. At TechCrunch, his focus spans from emerging technologies and niche startups to major... Read More →

Agatha Poon

Agatha Poon has been a market research professional for 12 years, and has been a regular speaker at numerous professional gatherings and industry events, including PTC, CommunicAsia, BrightTalk and Cloud Computing Expo. Agatha joined 451 Research in April 2010, bringing expertise... Read More →


Wednesday October 28, 2015 11:15am - 11:55am JST
Ogyoku

12:05pm JST

Auto Scaling Cloud Infrastructure and Applications
In this session, we will show you how to use OpenStack's Heat, Monasca, and Neutron, to autoscale your application! Autoscaling helps you maintain application reliability and intelligently scales your infrastructure while managing infrastructure costs. 

In the Liberty release you will be able to use performance and health metrics to trigger scaling policies in Heat. This powerful feature enables the following use cases and much more:

  • Dynamically remove VMs when load is low

  • Automatically scale up to larger VMs when load is high

  • Increase reliability and automatically recover from failed VMs

  • Automatically rebalance application traffic via the Neutron LBaaS


As part of this presentation the audience will learn more about how we have implemented autoscaling, some of the challenges faced during the implementation, and application autoscaling best practices. We will also demonstrate horizontally autoscaling a sample web application while rebalancing incoming traffic with zero downtime.
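
As a rough idea of the kind of Heat template such a setup builds on, here is a minimal sketch written as a Python dict for brevity. Resource types follow the standard Heat documentation; names, images, flavors and thresholds are placeholder assumptions, and the Monasca-backed alarm used in Liberty may differ from the Ceilometer alarm shown.

# Rough sketch of a Heat (HOT) autoscaling template, written as a Python dict.
# Resource types follow standard Heat documentation; names, images, flavors and
# thresholds are placeholder assumptions.
template = {
    "heat_template_version": "2014-10-16",
    "resources": {
        "web_group": {
            "type": "OS::Heat::AutoScalingGroup",
            "properties": {
                "min_size": 2,
                "max_size": 10,
                "resource": {
                    "type": "OS::Nova::Server",
                    "properties": {"image": "my-web-image",
                                   "flavor": "m1.small"},
                },
            },
        },
        "scale_out": {
            "type": "OS::Heat::ScalingPolicy",
            "properties": {
                "auto_scaling_group_id": {"get_resource": "web_group"},
                "adjustment_type": "change_in_capacity",
                "scaling_adjustment": 1,
                "cooldown": 60,
            },
        },
        "cpu_alarm_high": {
            "type": "OS::Ceilometer::Alarm",
            "properties": {
                "meter_name": "cpu_util",
                "statistic": "avg",
                "period": 60,
                "evaluation_periods": 1,
                "threshold": 80,
                "comparison_operator": "gt",
                # Hit the scale-out policy's webhook when the alarm fires.
                "alarm_actions": [{"get_attr": ["scale_out", "alarm_url"]}],
            },
        },
    },
}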

Speakers

KANAGARAJ MANICKAM

KANAGARAJ MANICKAM, Huawei Technologies India Pvt. Ltd.
Sr. Software Engineer @ HP. Expertise in HP storage, server and data-center management and automation. Core reviewer @ OpenStack Heat, actively working on OpenStack Heat since the Kilo release. In addition, he has good developer and operator knowledge of Keystone, Nova, Cinder and Neutron... Read More →

Matt Young

Director of Product Management, Puppet


Wednesday October 28, 2015 12:05pm - 12:45pm JST
Ogyoku

2:00pm JST

Data Lake on OpenStack - Petabyte Scale!
Got lots of data that you want to make use of? Not so easy to set up an environment to do so and maintain it, eh? Symantec’s data lake is a large scale example of marrying OpenStack platform technologies with big data enabling technologies such as Hadoop, Hive, Storm, Kafka, Spark, etc. This talk will cover what Symantec has done to allow our various teams to easily leverage our many petabytes of security data to increase the protection of our customers against threats such as APTs, identity thieves, and malicious web sites.

Symantec leverages our OpenStack cloud to create multiple analytics clusters, ranging in size from multi-PB to just a few VMs. We use various OpenStack services through a CloudBreak plug-in. Some other technologies we use in setting up and operating these clusters include Ambari, Puppet, a home-grown synthetic transaction system, Zabbix, and Dasher.

Speakers

David T. Lin

Senior Director, Cloud Platform Engineering, Symantec
Cloud Security


Wednesday October 28, 2015 2:00pm - 2:40pm JST
Ogyoku

2:50pm JST

Managing Microservices at Scale With OpenStack + Docker - Your Ideal Environment for DevOps
OpenStack gives you a non-proprietary and extensible cloud. Microservices and Docker allow for an extensible app architecture and a vendor-agnostic, scalable infrastructure. While microservices simplify app deployments, they come with a price: because the architecture is so fragmented, it is more difficult to track and manage all the independent, yet inter-connected, components of the app.

With a combination of Docker, OpenStack, and an end-to-end orchestration layer you can have a microservices architecture while supporting easy deployments across Build, QA, and Production environments, with a scalable, centrally managed OpenStack infrastructure.

This seamless integration between the application and the infrastructure simplifies and accelerates your DevOps processes and software delivery pipeline, while maximizing compute resources.

Using real examples and a live demonstration (JIRA, Jenkins, Chef, Selenium), this talk will cover best practices and tips for enabling a robust, scalable and extensible DevOps infrastructure to support today’s modern app delivery – all the way from architecture, pipeline design, build, test, and deployment.

Speakers

Nikhil Vaze

Staff Software Engineer, Electric Cloud
Nikhil Vaze is a Staff Software Engineer on the Electric Cloud engineering team. He is a full stack engineer and loves to hack on things. Nikhil holds a Bachelor of Science in Computer Engineering and Master of Science in Security Informatics from Johns Hopkins University.


Wednesday October 28, 2015 2:50pm - 3:30pm JST
Ogyoku

3:40pm JST

Tenant Network Isolation for Bare Metal Deployments With Neutron
Ironic currently supports provisioning bare metal deployments on a flat network. While this may be acceptable in small or test deployment scenarios, it is not a desirable solution for larger deployments where multi-tenancy support is needed. Flat networks do not provide isolation of tenant traffic, so operators end up creating extensive infrastructure to provide tenant isolation for their deployments. Ideally, operators want to use the same tenant isolation for their bare metal deployments that is available for virtual machine deployments (i.e., VLAN- or VXLAN-based isolated networks).

We propose utilizing Neutron networking in Ironic for bare metal deployments in the same manner that these networks are available to Nova for virtual deployments. This streamlines and simplifies bare metal deployments while providing full multi-tenancy support.
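
If and when this lands, the tenant-facing workflow would presumably look much like booting a VM on a tenant network does today. The hypothetical sketch below uses the standard Neutron and Nova Python clients; all URLs, credentials, names and IDs are placeholder assumptions, not part of the proposal itself.

# Hypothetical tenant workflow once Ironic honours Neutron tenant networks;
# all URLs, credentials, names and IDs below are placeholder assumptions.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client

auth = v3.Password(auth_url="http://keystone:5000/v3",
                   username="demo", password="secret", project_name="demo",
                   user_domain_name="Default", project_domain_name="Default")
sess = session.Session(auth=auth)
neutron = neutron_client.Client(session=sess)
nova = nova_client.Client("2", session=sess)

# An isolated tenant network, created exactly as it would be for VMs.
net = neutron.create_network({"network": {"name": "bm-tenant-net"}})["network"]
neutron.create_subnet({"subnet": {"network_id": net["id"],
                                  "ip_version": 4,
                                  "cidr": "10.0.42.0/24"}})

# Boot on a bare metal flavor; the node's NIC would land on the tenant network.
server = nova.servers.create(name="bm-node-1",
                             image="<baremetal-image-uuid>",   # placeholder
                             flavor="<baremetal-flavor-id>",   # placeholder
                             nics=[{"net-id": net["id"]}])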

Ironic and Neutron teams have been working together to make this a reality. We will present the details and deep dive of this implementation in this session. 

Come and learn how you can take advantage of this framework for your deployment scenarios.

Speakers

Sukhdev Kapur

SDN Engineering, Arista Networks
Sukhdev Kapur is part of SDN Engineering team at Arista Networks - pioneer of software driven cloud networking. He has been actively contributing to the development of Neutron. Sukhdev is a networking veteran with over 20 years experience in highly available distributed systems, cloud... Read More →

Jim Rollenhagen

Software Developer, Okta

Devananda Van Der Veen

Bare Metal Cloud Architect, IBM Cloud / SoftLayer
Devananda is opinionated and passionate about using technology to improve humanity. He began working on OpenStack in 2012 and started the Ironic project a year later, adding bare metal provisioning to the growing cloud platform, and subsequently served on the OpenStack Technical Committee... Read More →


Wednesday October 28, 2015 3:40pm - 4:20pm JST
Ogyoku

4:40pm JST

Lessons Learned Using BOSH and OpenStack APIs to Deploy Large Distributed Systems
CloudFoundry (CF) is a Platform-as-a-Service (PaaS) that is designed to be agnostic to infrastructure-as-a-service (IaaS) clouds and application platforms. This means that CF can be deployed on many IaaS clouds, e.g., AWS, OpenStack, SoftLayer, and can deploy applications built on varied platforms, e.g., Ruby-on-Rails, Python-Django, Golang, PHP/Zen, and so on.

CF achieves its cross-cloud capabilities by defining a thin layer called the Cloud Provider Interface (CPI), which is implemented for each targeted cloud and is used by CF's cloud tooling, BOSH, to help create, manage, and maintain IaaS resources such as VMs, IPs, and storage.

One of the key targets in the CF community is clouds supporting OpenStack. Using an OpenStack CPI we should, in theory, be able to easily deploy CF, which consists of at least tens of VMs, sometimes with dozens of running jobs (long-running processes), onto OpenStack clouds.

While the results are achievable in theory, in practice, what we have seen is that the actual deployments and maintenance of CF installations in different OpenStack providers of the same version result in changes and differences that bleed into the CPI layer. This means that the same CPI needs to be changed even though we target two OpenStack clouds supporting the same API version!

In this talk we describe the CF BOSH CPI layer and the various reasons why in our experiences we are still not able to perfect cross-cloud deployments of large systems, such as CF, in different OpenStack providers and versions.
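
As a rough picture of the CPI contract the talk refers to, the sketch below lists an illustrative subset of its operations as a Python interface. Real CPIs are typically written in Ruby and the exact method names and signatures vary between BOSH releases, so everything here is an assumption for illustration.

# Illustrative subset of a BOSH Cloud Provider Interface (CPI), sketched as a
# Python interface. Real CPIs are commonly written in Ruby and their exact
# method signatures vary between BOSH releases; the names and arguments here
# are assumptions for illustration only.
import abc


class CloudProviderInterface(abc.ABC):
    """Operations BOSH invokes against a target IaaS (e.g. OpenStack)."""

    @abc.abstractmethod
    def create_stemcell(self, image_path, cloud_properties):
        """Upload a stemcell image (e.g. as a Glance image); return its ID."""

    @abc.abstractmethod
    def create_vm(self, agent_id, stemcell_id, resource_pool, networks, env):
        """Boot a VM (e.g. via Nova) from a stemcell; return its cloud ID."""

    @abc.abstractmethod
    def delete_vm(self, vm_id):
        """Terminate a VM previously created by create_vm."""

    @abc.abstractmethod
    def create_disk(self, size_mb, cloud_properties, vm_id):
        """Create a persistent disk (e.g. a Cinder volume); return its ID."""

    @abc.abstractmethod
    def attach_disk(self, vm_id, disk_id):
        """Attach a persistent disk to a VM."""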

Speakers

Michael Maximilien

Distinguished Engineer, IBM
My name is Michael Maximilien, better known as max or dr.max, and I am currently a Distinguished Engineer with IBM. I am the leader for IBM’s Open Source team contributing to all things Serverless and Platform-as-a-Service (PaaS). I have worked at various divisions of IBM. At... Read More →

Zhou Xing

Software Engineer, IQiYi
Tom Xing graduated from Peking University in 2009 and then joined the IBM China Development Lab as a software engineer. Tom now works on the IBM open source and open standards team, focusing on CloudFoundry project development. As an active open source contributor, Tom has great... Read More →

Hua Zhang

Advisory Software Engineer
My name is Hua Zhang (Edward). I graduated from the computer science department of Tsinghua University and joined the Open Standards Team of IBM China in 2009. As an active contributor to the OpenStack and CloudFoundry communities, I believe open source software can change the world... Read More →


Wednesday October 28, 2015 4:40pm - 5:20pm JST
Ogyoku

5:30pm JST

Software Factory: Continuous Integration/Continuous Delivery (CI/CD) on OpenStack
In this talk we'll give you an overview of a platform called Software Factory that we develop and use at Red Hat. It is an open source platform, inspired by OpenStack's development workflow, that embeds, among other tools, Gerrit, Zuul, and Jenkins. The platform can be easily installed on an OpenStack cloud thanks to Heat, and can rely on OpenStack to perform CI/CD of your applications.

In this session, you will learn how to:

- Deploy a CI/CD platform (Software Factory) on OpenStack
- Manage your CI/CD workflow thanks to Zuul
- Manage your slave nodes
- Export CI/CD job logs to Swift (see the sketch below)
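
Purely as an illustration (not code taken from Software Factory itself), pushing a finished job's console log to Swift with python-swiftclient might look like the following; credentials, container and object names are placeholder assumptions.

# Illustration only: uploading a CI job's console log to Swift with
# python-swiftclient. Credentials, container and object names are placeholder
# assumptions; this is not code from Software Factory itself.
from swiftclient import client as swift_client

conn = swift_client.Connection(authurl="http://keystone:5000/v2.0",
                               user="ci-logs", key="secret",
                               tenant_name="ci", auth_version="2")

conn.put_container("job-logs")
with open("console.log", "rb") as log_file:
    conn.put_object("job-logs", "builds/1234/console.log",
                    contents=log_file, content_type="text/plain")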

Speakers

Fabien Boucher

Senior Engineer, Red Hat
My team within Red Hat focuses on developing and improving Opendev's CI/CD toolbox. We aim to provide access to this toolbox to other dev teams via a CentOS based Linux distribution dedicated to software development called Software Factory ( https://softwarefactory-project.io ). I... Read More →

Matthieu Huin

Senior Software Engineer, Red Hat
My team within Red Hat focuses on developing and improving Opendev's CI/CD toolbox. We aim to provide access to this toolbox to other dev teams via a CentOS based Linux distribution dedicated to software development called Software Factory ( https://softwarefactory-project.io ). I... Read More →


Wednesday October 28, 2015 5:30pm - 6:10pm JST
Ogyoku
 
Thursday, October 29
 

9:00am JST

Storlets: Making Swift More Software Defined Than Ever
The storlet framework enables running user-defined functions, such as transformations and filtering of data, as it is uploaded to or downloaded from an object store. We have integrated the storlet framework with Swift using the standard 'middleware way'. Unlike conventional Swift middleware, though, storlets provide a framework to run a dynamically loaded computation on the Swift data path, where the computation executes on either the object or the proxy nodes inside a Docker container.
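
To give a flavour of what such a user-defined function might look like, here is a purely hypothetical Python sketch of a storlet that filters an object's lines as it streams through the data path; the real storlet interface differs (the reference implementation is Java-based), so treat the class and method names as assumptions.

# Purely hypothetical sketch of a storlet-style user-defined function that
# filters the lines of an object as it streams through the Swift data path.
# The real storlet interface differs (the reference implementation is
# Java-based); the class and method names here are assumptions.
class GrepStorlet(object):
    def __call__(self, in_stream, out_stream, params):
        """Copy only lines containing params['pattern'] to the output."""
        pattern = params.get("pattern", "").encode("utf-8")
        for line in in_stream:
            if pattern in line:
                out_stream.write(line)
        out_stream.close()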

Calling for participation, we are releasing an initial reference implementation of the storlet framework, which uses Docker containers, as a Stackforge project.  In addition, the related Swift middleware has been submitted to the Swift community for a review.

In this talk we will review the project, detail the design of the storlet framework, show use cases for analytics and media, and outline the future plans for Storlets, including potential integration with container management frameworks like Magnum as well as other integration options.

Speakers

Paul Luse

Principal Engineer, Intel Corporation
Paul is a Principal Engineer and Software Development Lead working in the Storage Group at Intel and is primarily focused on cloud storage software. He has been working in storage-related technologies for most of his 20+ year career at Intel. Recently, Paul played a key role in the development... Read More →

Eran Rom

Research Staff Member, IBM
Eran Rom is a researcher in the IBM Haifa research lab focused on systems and storage. In recent years Eran has been mostly involved with object stores doing architecture research and development projects centered around object stores.

Hamdi Roumani

IBM Cloud lab Developer


Thursday October 29, 2015 9:00am - 9:40am JST
Ogyoku

9:50am JST

MidoNet 101 - Open Source Distributed Overlay Networking for Neutron
Since MidoNet was open sourced at the OpenStack Paris summit, many production deployments of OpenStack have selected it for their networking. Over 20 companies are now involved with the open source project, and it's quickly growing in popularity.

In this session, I'll give an introduction to MidoNet, an open source network virtualization overlay (NVO) plugin for Neutron. We'll do a full overview of the architecture and features, and give a demo showing off some of the advanced functionality MidoNet can offer, like distributed load balancing and firewalls. You'll also get an overview of how to get started deploying MidoNet in as little as 20 minutes.

By the time you leave this session, you'll have a jump start education on one of the coolest Neutron plugins around.

Speakers

Adam Johnson

VP of Business, Midokura
Adam runs sales, marketing and alliances for Midokura, where he applies his deep experience using open source software to address customer challenges across diverse industries. Previously Adam was founder and COO of Genkii, and consulted as a software developer to social media and... Read More →


Thursday October 29, 2015 9:50am - 10:30am JST
Ogyoku

11:00am JST

Ironic Towards Truly Open and Reliable, Eventually for Mission Critical
Bare metal provisioning is inevitable for the cloud. Especially in a large multi-vendor environment, it is a daunting task that is difficult to fully automate, because failures can easily happen in many ways.

This session will present new Ironic features that we contributed in Kilo and Liberty, and that we plan to contribute in the M release and later, based on our proposed vision of "OpenStack for mission critical platforms".

Fujitsu sustains many mission critical systems of social infrastructure, such as banking, stock exchanges, factory automation, and government agency systems.

Based on those experiences, we believe that the most important customer values are:

  • Truly open, no vendor lock-in system to provide customer with freedom to switch any vendor anytime

  • Reliable, robust, highly available system to operate customer's business continuously

  • Responsive, responsible, competent support to resolve customer's incident quickly and accurately.


As the first step towards the final goal, we contributed the following features:

  • Virtual Media Deployment for large scale multi-vendor environment

  • Bare metal Graceful Shutdown for better maintenance

  • NMI dump for better support

  • Ironic Network Neutron SG/FW packet logging feature


As the next step, we plan to contribute:

  • Ironic-Nova Integration for unified operation

  • Ironic-Neutron Integration for multi-tenant support

  • Ironic-Cinder Integration for N+1 redundancy support

  • OpenStack logging improvement


This presentation will also include an Ironic demo of the features implemented above.

Speakers

Naohiro Tamura

Professional Engineer, Fujitsu Limited
Professional Engineer, Fujitsu Limited. He is currently working on the FaaS Shell project in serverless computing (https://github.com/NaohiroTamura/faasshell). Previously he worked on the OpenStack Ironic project, and was a speaker at the OpenStack Summit Tokyo 2015.



Thursday October 29, 2015 11:00am - 11:40am JST
Ogyoku

11:50am JST

OpenStack Nova Project Update
This presentation starts with a brief introduction to OpenStack's Nova project, including a description of Nova's mission and scope.

Then we will take a whistle-stop tour of some of the big things the Nova project has been working on during Liberty. Nova is currently doing a lot of architectural evolution work. Learn about how Nova is evolving its public API. Discover what Nova is doing in its drive towards zero-downtime upgrades. Learn how Cells v2 is likely to enhance every Nova deployment. Find out about the Feature Classification effort that is happening, including its work to describe how some technology choices can limit the features that are available to you, and our push to try and plug those gaps.

We will finish with some pointers on how to get more involved with the Nova project, and how to find out more about what has been happening during Liberty.

Speakers

John Garbutt

Principal Engineer, Rackspace
John is currently a Principal Engineer at Rackspace, Nova PTL for the Liberty and Mitaka releases, and has been involved with OpenStack as a Software Developer since late 2010. He started with Citrix's Project Olympus private cloud packaging of OpenStack, and soon after working upstream... Read More →


Thursday October 29, 2015 11:50am - 12:30pm JST
Ogyoku

1:50pm JST

The Life and Times of an OpenStack Virtual Machine Instance
What exactly happens when you click Launch Instance in OpenStack’s Dashboard?

The basics are pretty well understood, but how about if we look a level deeper than the simple, abstract narrative? What are the technologies involved below OpenStack? How does OpenStack coordinate those technologies to get you a running VM that you can SSH into?

In this session, Mark will cover as many of the high-level and low-level details of the story of your “Launch Instance” request as possible, right up until the time that you get a shell on your VM. By the end of the session, even the most seasoned expert will hopefully have learned some surprising facts and be eager to learn more about some obscure detail of how this crazy cloud thing works!
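
For orientation, the same request made outside the Dashboard is a single Compute API call; a minimal sketch with python-novaclient (credentials, image and flavor IDs are placeholder assumptions, not the session's own example) looks like this:

# Minimal sketch: the "Launch Instance" request made directly against the
# Compute API with python-novaclient. Credentials, image and flavor IDs are
# placeholder assumptions.
import time

from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client as nova_client

auth = v3.Password(auth_url="http://keystone:5000/v3",
                   username="demo", password="secret", project_name="demo",
                   user_domain_name="Default", project_domain_name="Default")
nova = nova_client.Client("2", session=session.Session(auth=auth))

server = nova.servers.create(name="demo-vm",
                             image="<image-uuid>",     # placeholder
                             flavor="<flavor-id>",     # placeholder
                             key_name="demo-key")

# Everything the session describes happens between this call and ACTIVE.
while nova.servers.get(server.id).status not in ("ACTIVE", "ERROR"):
    time.sleep(2)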

Speakers

Mark McLoughlin

OpenStack Technical Director, Red Hat
Mark McLoughlin is Technical Director for OpenStack at Red Hat and has spent over a decade contributing to and leading open source projects like GNOME, Fedora, KVM, qemu, libvirt, oVirt and, of course, OpenStack. Mark is a member of the OpenStack Foundation board of directors, and... Read More →


Thursday October 29, 2015 1:50pm - 2:30pm JST
Ogyoku

2:40pm JST

Distributed Health Checking for Compute Node High Availability
Compute node high availability means that when hardware or the network fails, or the host operating system crashes, the node should be fenced and shut down, and the instances on the node should be relocated and rebooted on other compute nodes. A vanilla OpenStack deployment instead demands that the tenant workloads themselves provide fault tolerance and failover abilities. While this assumption is true for modern clustered applications, there are still many IT solutions in traditional industries that rely on compute node high availability. This is a barrier to OpenStack deployment in traditional enterprise IT.

In a small deployment, it is tempting to set up a monitoring service for compute nodes and call Nova host-evacuate when a compute node fails. However, the monitoring service itself then becomes a single point of failure and a hot spot. There are also proposals, and perhaps implementations, that use ZooKeeper or Pacemaker (with Pacemaker-remote), because they provide heartbeat and membership services. The basic idea is to have compute nodes register as ephemeral znodes, with ZooKeeper maintaining a heartbeat. We can also run Pacemaker-remote on the compute nodes to achieve a similar effect.

The problem is that the heartbeat usually runs on the OpenStack management network. If a host has good storage network connectivity but failed management network connectivity, we should not consider it failed and perform fencing and evacuation: a failed management network connection only means we cannot boot new instances on the server, it does not affect the running instances, so evacuation would cause unnecessary downtime for the tenant workloads. On the other hand, if the management network is good for a host but the storage network fails, we should fence and evacuate the host. The ZooKeeper and Pacemaker-remote style of solution also suffers from a scalability problem, because the heartbeats happen between a few ZooKeeper/Pacemaker server nodes and many compute nodes.

Hence we propose a distributed health checking mechanism for compute nodes. It can deal with compute node power failure, host OS crashes, memory going bad, disk failure, interruption of the management, storage or tunnel networks, and so forth.

We use the Gossip protocol for distributed heartbeat checking. The Gossip implementation comes from the Consul project (consul.io). The main idea is to run an agent on each compute node and have the agents probe each other. The agent on the compute node can also check and monitor many other things, such as OpenStack services and hardware status.

We run distributed heartbeat checking on all the OpenStack logical networks, usually the management, storage and tunnel networks, and report the network connectivity and other monitored status to the controller node, which then decides whether we should fence and evacuate the node based on all of this information. We present and discuss an example decision matrix. It is also possible to create a plugin for Ceilometer to report the data and events gathered from the distributed health checking, so admins can register alarms, add handlers, and decide what to do in a highly flexible way.
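
As a purely illustrative example of such a decision matrix (the policy actually presented in the session may differ), a minimal sketch in Python:

# Illustrative decision matrix only; the policy discussed in the session may
# differ. Inputs are per-network connectivity reported by the gossip agents.
def decide(mgmt_ok, storage_ok, tunnel_ok):
    """Return the action the controller should take for one compute node."""
    if mgmt_ok and storage_ok and tunnel_ok:
        return "healthy"
    if not storage_ok or not tunnel_ok:
        # Running instances lose their disks or tenant traffic: fence the node
        # and evacuate its instances.
        return "fence-and-evacuate"
    # Only the management network failed: running instances are unaffected, so
    # stop scheduling new instances onto the node instead of evacuating.
    return "disable-scheduling"


assert decide(False, True, True) == "disable-scheduling"
assert decide(True, False, True) == "fence-and-evacuate"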

We also propose a fencing mechanism based on a custom Gossip query to complement IPMI remote power-off. In the case of host power failure, it is not possible to distinguish IPMI network failure from actual power failure; the symptoms are the same. If we fail to confirm the power state of the failed host via IPMI, we send a custom Gossip query to the targeted node. Upon receiving the query, the target node sends an ack and stops feeding the hardware watchdog, letting the watchdog shut down the host. If a node does not receive Gossip heartbeats from any of the other nodes, it should fence itself and shut down the host. In this way, the fence request either reaches the target host, or the host fences itself in the case of network connectivity failure, or the host has actually experienced a power failure. From the perspective of the controller, given a reasonable amount of time, it can be sure that the failed host has been powered off, and thus it will be safe to perform a host evacuation.

Speakers

Liu Jun Wei

China Mobile (Suzhou) Software Technology Co., Ltd., No. 78 Keling Road, Suzhou Science & Technology Tower, Hi-Tech Area, Suzhou City, Jiangsu Province, China
Liu Junwei holds a master's degree from the University of Science and Technology of China and is an R&D Director at the China Mobile Suzhou Development Center.

Alex Xu

Software Developer
Alex Xu has been involved in OpenStack since the Grizzly release, working on OpenStack Neutron and Nova. He is currently an active contributor to Nova.

Zheng Sheng Zhou

Software Developer
Zhengsheng Zhou is a software developer at AWcloud, currently responsible for CI/CD development.


Thursday October 29, 2015 2:40pm - 3:20pm JST
Ogyoku

3:30pm JST

Debugging the Virtualization Layer (libvirt and QEMU) in OpenStack
Virtualization drivers (e.g. libvirt, QEMU/KVM) are the core part of OpenStack Compute layer. An OpenStack environment is challenging to debug as is -- more so when multiple Compute nodes and thereby multiple libvirt daemons and QEMU instances are involved. A good grasp of Virtualization debugging mechanisms is vital for effective root cause analysis. To that end, libvirt and QEMU provide a rich set of debugging controls that allow us to query (or modify) the state of virtual machines in distress.

This talk focuses on providing an in-depth view of aforementioned techniques. Topics include: debugging Nova Compute process crashes; gathering specific patterns from libvirt log filters, libvirt environment variables, and systemd journal fields; live querying the VM (and QEMU) state through `virsh` and QEMU Machine Protocol (QMP) commands; tuning the libvirt daemon logging; monitoring events emitted by QEMU, etc.
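
As a small taste of the live-querying part, a quick look at a guest through the libvirt Python bindings might look like the sketch below; the domain name is a placeholder and the snippet must run on the compute node hosting the instance.

# Small sketch of live-querying a guest through the libvirt Python bindings;
# the domain name is a placeholder, and this runs on the hosting compute node.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000001")   # placeholder Nova instance name

state, reason = dom.state()
print("state=%s reason=%s" % (state, reason))
print(dom.XMLDesc())   # the full libvirt XML of the running guest

conn.close()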

The audience would include OpenStack infrastructure operators, virtualization (libvirt/QEMU/KVM) administrators, developers, tinkerers, or anyone interested in understanding the virtualization layer in OpenStack and equipping themselves with better debugging techniques.

Speakers

Kashyap Chamarthy

Senior Software Engineer, Red Hat
Kashyap Chamarthy works at Red Hat, as part of OpenStack Infrastructure engineering group, focusing his contributions on interactions between OpenStack and its underlying Virtualization components (libvirt, QEMU, KVM). In the past, he's presented and participated in the past four... Read More →


Thursday October 29, 2015 3:30pm - 4:10pm JST
Ogyoku

4:30pm JST

Dude, This Isn't Where I Parked My Instance!?
OpenStack Compute provides a number of facilities for moving instances around, but it's not always immediately obvious how they differ from each other. In this session, learn the differences between each of the available options, including evacuations, cold migrations, and live migrations, as well as the internal mechanics of each, including some of the ways they can differ when using different hypervisor backends. You will also learn about the prerequisites for enabling each method and the optimal configurations for ensuring the right combination of security and performance.
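
For orientation, the three operations compared in this session map onto distinct Compute API calls; a minimal python-novaclient sketch (where `nova` is assumed to be an authenticated client and the host names are placeholder assumptions) is:

# Minimal sketch mapping the three "move an instance" operations onto
# python-novaclient calls; `nova` is assumed to be an authenticated novaclient
# Client, and the host names are placeholder assumptions.
def move_instance_examples(nova, server_id):
    server = nova.servers.get(server_id)

    # Cold migration: the instance is stopped and rebuilt on a host chosen by
    # the scheduler, then confirmed (or reverted) like a resize.
    nova.servers.migrate(server)

    # Live migration: the instance keeps running while its memory is copied
    # over to the target host.
    nova.servers.live_migrate(server, host="compute-02",
                              block_migration=False, disk_over_commit=False)

    # Evacuate: rebuild the instance on another host after its original host
    # has failed; shared storage lets it keep its disk contents.
    nova.servers.evacuate(server, host="compute-03", on_shared_storage=True)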

Speakers

Steve Gordon

Principal Product Manager, Red Hat
Geographically displaced Australian. Focused on building infrastructure solutions for compute use cases using a spectrum of virtualization, containerization, and bare-metal provisioning technologies. Stephen is currently a Principal Product Manager at Red Hat based in Toronto, Canada... Read More →


Thursday October 29, 2015 4:30pm - 5:10pm JST
Ogyoku
 

