How can we make log messages and the log library both flexible enough and standard enough that the library meets the needs of projects without imposing project-specific formats on the log stream?
Progress has been made, but let's put together a plan to keep this moving forward in Mitaka.
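As a starting point for discussion, here is a minimal sketch of the standard oslo.log setup, assuming a hypothetical service name 'myservice'; the point is that formatting decisions live in configuration rather than in the log calls themselves:

    from oslo_config import cfg
    from oslo_log import log as logging

    CONF = cfg.CONF
    logging.register_options(CONF)    # register the common logging options
    logging.setup(CONF, 'myservice')  # handlers/formatters come from config

    LOG = logging.getLogger(__name__)
    LOG.info('instance %(uuid)s started', {'uuid': 'abc-123'})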
During the Liberty cycle we introduced a number of governance tags that can be applied to project teams and deliverables, in an effort to provide information about what we produce in the big tent and help our users navigate that ever-growing set of projects.
This session will look back at the tags we introduced and brainstorm the next steps. The 'next-tags' workgroup at the Technical Committee already has a few things in the pipeline (tags to describe which projects have a Horizon panel, Heat resources, or DevStack plugins; tags to describe the QA being done...), but what are we missing?
* (Very) short status of the work done in Liberty
* DevStack: install clients and servers with Python 3 and run functional tests with Python 3
* Wiki: https://wiki.openstack.org/wiki/Python3
* Define goals for the Mitaka release. Example: functional tests of Neutron and Heat must run on Python 2 and Python 3 in the gate (see the small compatibility sketch after this list).
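To make the porting work concrete, here is a minimal sketch of the kind of two-and-three-compatible code the effort produces, using six (already used across OpenStack); the helper function here is hypothetical:

    import six

    def to_text(value):
        """Return text on both Python 2 and Python 3."""
        if isinstance(value, six.binary_type):
            return value.decode('utf-8')
        return six.text_type(value)

    print(to_text(b'server-1'))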
Moderator: Victor Stinner
Etherpad: https://etherpad.openstack.org/p/mitaka-cross-project-python-3
Tuesday October 27, 2015 2:50pm - 3:30pm JST
Amethyst room
Over the last few cycles we realized that the OpenStack Way, our common culture and reference points, was no longer naturally transmitted to newcomers via oral tradition. To start solving that problem, during the Liberty cycle the Technical Committee formed a workgroup to document our common culture. The result is the Project Team Guide, which can be found at:
http://docs.openstack.org/project-team-guide/
What's the next step there? Where can we improve it? Any new chapters to suggest? Should we further deprecate the wiki? How can we improve publicity around the guide?
We also started work to make keeping up with what's happening in OpenStack development easier, including summaries of important threads from openstack-dev. In this session we'll brainstorm how to further improve on that.
Multinode testing is something we have all wanted for a long time. We have had it for a while now, but it's applied inconsistently. We should figure out where it fits in our testing setup. Do we replace the default single-node tests with multinode tests? Do we use multinode only for testing special cases like DVR and live migration?
The test environment can be complicated, and adding more nodes raises the bar on reliability requirements. There is a lot to consider here, and we likely need broad input to determine the future of multinode testing.
It would be helpful to operators of OpenStack clouds if some set of configuration options for OpenStack services were reconfigurable on the fly.
At a minimum, we could start with log level, allowing operators to move from INFO to DEBUG in response to issues, then back to INFO once the data needed has been acquired. Ideally, services would notice config file changes and manage themselves entirely without requiring a restart.
Note that this might benefit from sectioning the config file into "dynamic" and "require restart" options, to help operators know which settings can be changed on the fly.
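As one illustration of the log-level case, here is a minimal sketch assuming a plain SIGHUP convention (hypothetical; a real service would re-read its oslo.config file rather than hard-code the level):

    import logging
    import signal

    LOG = logging.getLogger('myservice')

    def on_sighup(signum, frame):
        # A real handler would re-read the config file; DEBUG is
        # hard-coded here for brevity.
        LOG.setLevel(logging.DEBUG)
        LOG.warning('log level switched to DEBUG without a restart')

    signal.signal(signal.SIGHUP, on_sighup)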
We have successfully split the alarming function out into the new service "Aodh"; let's check further development plans for alarming.
(cross-project topic)
- Reify Aodh independence
(Aodh internal topics)
- aodh client or openstack client?
- (potential) new evaluation for combination alarms?
- check work items: event-alarm/multi-worker, event-alarm/RBAC (event-alarm only?), quota, and Tempest/Rally support (to identify performance problems)
Without a UI of some kind, Ceilometer can seem rather opaque. If the integration in Horizon were both more interesting and more performant, that would be helpful. What can we do to improve it? (A small client sketch follows the list.)
- Horizon attempts to show meters (and is slowly trying to show events) -- a lot of the feedback in the survey is that it's unusable.
- How do we visualise events?
- How do we leverage the existing Gnocchi + Grafana integration?
- There are other consumers, such as CloudKitty, which uses Ceilometer distinctly for chargeback; how do we create an interface for them?
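For context, the programmatic route to meters today is python-ceilometerclient; a minimal sketch of listing meters, of the kind Horizon's views are built on (the endpoint and credentials are hypothetical):

    from ceilometerclient import client

    cc = client.get_client(
        '2',
        os_username='admin',
        os_password='secret',  # hypothetical credentials
        os_tenant_name='admin',
        os_auth_url='http://controller:5000/v2.0',
    )
    for meter in cc.meters.list():
        print('%s (%s) on %s' % (meter.name, meter.unit, meter.resource_id))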
The purpose of this fishbowl session is to define priorities for Kolla Mitaka development. We will brainstorm the things we want to accomplish as a community and vote on them in the Etherpad at the conclusion of the brainstorming session. Our most critical objective will be to continue to deploy more of the big tent. Half of this session will be spent voting to prioritize which services to target, with the results scheduled into m1/m2/m3.
The purpose of this 40-minute session is to brainstorm ideas representing gaps in Kolla's implementation that introduce pain points for Operators using Kolla, based upon Operator input. The general Operator community is highly encouraged to attend this session so that their gaps can be recorded and prioritized. The last seven minutes of the session will be spent flat-rank voting live on the Etherpad to help the community understand the prioritization of the brainstorming session's output.
This 40-minute fishbowl session will discuss technical approaches to upgrading Kolla from Liberty to Mitaka. Specifically, we need to be able to upgrade services which may have database migrations, and especially to upgrade infrastructure services such as MariaDB and RabbitMQ.
What problem is this trying to solve?

Way back in Havana, Glance introduced "property protections". This feature was requested by deployers, who wanted the ability to put licensing information on images (which would be put onto instances) and have this info also be propagated to snapshots by Nova, so that instances created from snapshots would also receive this information. To make this work for licensing/billing info, of course, the image owner must not be able to modify such properties.

The solution in Glance allows control over properties to be expressed in the policy.json file, and allows the properties that are protected to be specified by regex. Thus it is extremely configurable, which is both a blessing and a curse. We left it up to deployers to communicate to their end-users how property protections work in their cloud. However, other OpenStack projects that create images (e.g., Nova, Cinder) currently require some crazy configuration/deployment gyrations to respect property protections. Additionally, projects that allow image property creation/updates (e.g., Horizon) would like better knowledge of what the protections are in order to provide a good user experience. And it also affects projects that read image records (e.g., Searchlight), because whether an end-user should see a property or not depends on how the property is protected.

There's a spec up in Nova [0] that proposes to fix this for Nova by introducing a new configuration option, "inheritable_admin_image_properties", that would specify a list of image properties for which Nova would use admin credentials to put on an image.

What I'd like to do in this session is to find out from some other projects how they interact with Glance with respect to property protections and how we can improve this process. For example, though the fix mentioned above would work for Nova, it would be nice if we could figure out a secure way to update "admin-only" properties in specific contexts without requiring the use of admin credentials. Additionally, maybe we can leverage metadefs to get the info about property protections out to Horizon and Searchlight. Even if we can't solve this in code during Mitaka, it would be worth gathering some best practices to communicate to deployers.

[0] https://review.openstack.org/#/c/206431/

Discussion: https://etherpad.openstack.org/p/mitaka-glance-xp-property-protections-support
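For reference, alongside the policies-based approach, Glance also supports a roles-based rule format: protections live in the file named by the property_protection_file option, with property_protection_rule_format = roles. A minimal illustrative excerpt, using a hypothetical x_billing_code_* property namespace:

    [^x_billing_code_.*]
    create = admin
    read = admin,member
    update = admin
    delete = admin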
Thursday October 29, 2015 9:00am - 9:40am JST
Amethyst room
* What new libraries will be useful to the OpenStack community?
  - apiclient
  - notifier
  - fairy-slipper project overlap?
  - anything related to the Python 3 support effort?
  - anything else we can extract from oslo-incubator?
* Review feedback from the survey
Thursday October 29, 2015 9:50am - 10:30am JST
Amethyst room
Two major topics will be discussed during this track:
* Testing: what's next? In this session we need to gather feedback from the group and understand what needs to be tested. More scenarios? Multi-node?
* Documentation: discuss Puppet OpenStack, Puppet in general, and Ruby best practices. See whether we need to document that and where exactly. Also some content about how to use the modules in a real environment: how can we help newcomers install OpenStack with Puppet? Should we try to contribute to the official OpenStack docs?