Chef Automate: Visibility Feature, Q&A Panel - July 13, 2016

Transcript

OK, we'll get started. Welcome, everyone. This is a Q&A session focused on Chef Automate, specifically on the visibility side of it; the next session, at about 3, will be on the workflow side. I'd like to introduce you to the team. Hi, I'm Tasha Drew, and I'm the product manager for our visibility and analytics platform. As you all know, we launched it yesterday as part of Chef Automate. And this is Tom.
Hi, I'm Tom. I was the lead engineer on the visibility side of the Automate product. Hi, I'm Christoph; I'm mostly responsible for the compliance program here. Back to Tasha. And over here we have Mark, who is head of UX.

So basically, today we're going to briefly go over the product, because I don't know how many of you have seen much of it. We'll quickly show you the product, plus some mocks of things we're building right now that we anticipate having in the product in the next month, which we think is cool: our full compliance data integration. And then we're totally open for questions or whatever you're interested in talking more about, so we'll keep it pretty succinct.

What I'm showing here is actually a series of mocks that emulate the actual experience of the product, but they also include previews of where we're going in the next month or so, specifically around compliance. If you saw the demo yesterday, then you know that essentially these screens allow you to see the state of your fleet and your nodes: where things are passing, failing, missing, or being skipped. The first thing to point out is how data gets in right now: you can look across all of your Chef servers or select individual Chef servers, and globally filter what you're seeing. If you come down here, you'll see this list, which shows your nodes and their converge status. This radial shows you the proportion of successes and failures, and you can filter based on its elements, so if you want to see all of your failed nodes you click here and you can scroll through this list and see them. There's also a trend chart that displays a few things: for the past 12 hours you're seeing what's converging and failing, but you're also seeing, based on these markers, where cookbooks were uploaded or changes were delivered through workflow. You can also filter by environment and role, but I think the most interesting features are up here in this area, because we've developed a query language that lets you put together more complicated queries based on, let's see what we have in there, resources, attributes, node names, recipes, and cookbooks. That's it for now, but the plan is that this will be open, so we'll be able to add to it and the queries can become more and more robust over time.

There are also a few other things up here: the ability to save searches you've executed and to share them. For instance, imagine you put together a query where you filter by environment or role and only failed nodes, and you want to share exactly that with a colleague. This lets you click up here, and the URL gets copied to your clipboard so you can share it in whatever form you'd like: email, chat, whatever. You can also save a search for later. The other part of this is compliance. This is a similar dashboard, only you're seeing your nodes in the context of their compliance state, so again you can see
all your compliant nodes here, along with the scoring. For instance, you have a series of checks that are critical, so you can sort by those, and so on. You can also, again, view where compliance profiles were uploaded or changes were delivered, and then click to view details. Essentially this is showing your run history over here; you can filter on it, click through to see the individual runs, and view the full logs and errors, and you
can filter on these resources to see what the problem was. That's what we have for now; we're moving toward building compliance into this as well, and that's what's coming next. Cool. And just from an integration perspective, basically
right now we can collect data from all of your Chef servers, Chef clients, and Chef Solos, and we gather data from the workflow aspects,
so that's how you're seeing when changes are delivered through workflow. We do plan to expand those visualizations in the next quarter. We also integrated with InSpec, so the data integrations are in the back end, and we're working on delivering the front-end visualizations that Mark just showed. Another aspect is that if you already have a data analysis solution that you're really happy with and you just want all the data that we're gathering, you can export it directly to that using a very simple thing we're calling a data tap, which is a queue of events that you can grab and export to another system. I don't know if we want to go over anything else, but does anyone have any questions off the top?

[Audience question about compliance reporting and sharing results.] That's an excellent question, and we've had that one before in the context of Chef Compliance as well; that's exactly what we're trying to achieve. Visibility is collecting the reports and storing them, so it also provides history. Chef Compliance is already good at showing you the current state, but the history feature wasn't there, so this is something we're really looking forward to bringing in. Specifically, it also comes down to how we do PDF reports that you can then share, and that is something you have to consider properly. The challenge is that you normally need to adapt them to corporate needs, so you need a template; it sounds easy, but it's not, and we want to make it right: adaptable to specific needs and customizable. We're trying to find out how to do it properly, so please reach out to our sales team so we can figure out how to deliver that.

Just a quick show of hands, for my own information: who in here is currently using Chef Compliance? OK. How many are currently using Chef server? And then, anyone using Chef Solo or chef-zero? Yeah, of course, lots of Solo. Basically, what we have in production today is very focused on the Chef server, Chef Solo, and Chef client data integrations, and the compliance piece we anticipate will be out probably at the beginning of August. For now, basically what you do is set up a Chef Automate server, do a very easy system-level integration, and we start collecting data from all the endpoints and displaying it the way you see here.

[Audience question about how data collection works.] The data collection mechanism is on Automate: there's a data collector endpoint, and it takes data being sent to it from your other products. For example, in the chef-client there is data collector configuration that you can set up; you give it a token and an endpoint, and it will send the data to that endpoint. The same goes for the Chef server. So Automate isn't grabbing data from things; it's being sent data by things. Your Chef servers send data to Automate. Correct, yes, so you can do whatever networking magic you want in the middle; it's just a RESTful API that all the data comes in on. Behind that, there's a RESTful API on Automate that you send the data to, and then it goes into the back end.
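For reference, the chef-client side of that configuration is a couple of lines in client.rb (chef-client 12.11 or later). A minimal sketch; the hostname and token below are placeholders for your own Automate endpoint and data collector token:

```ruby
# /etc/chef/client.rb
# Placeholder values: point these at your own Automate server and the
# data collector token configured there.
data_collector.server_url "https://automate.example.com/data-collector/v0/"
data_collector.token      "TOKEN_FROM_YOUR_AUTOMATE_SERVER"
```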
[Audience question about node attribute data.] Not yet; we anticipate we'll be adding that very soon, especially the attribute data. People love seeing all the attributes in the system, and I think they'll dig into it for about a week and then realize they only really need three of them, which would save a lot on storage. So we anticipate adding that functionality pretty soon.

[Question about saved searches.] The saved searches are saved to the backend database, so it's not a cookie; it's connected with your user account, yes.

[Question about exporting data.] Yes, the data tap allows you to export the data to any third-party data analysis tool that you want.

[Inaudible audience question.] That is on the roadmap, probably with Q4 as our target, yes.

[Question about who this interface is aimed at.] This interface is primarily geared toward practitioners and operations-focused engineers. For instance, a director of operations has a fleet of nodes they have to monitor constantly, and that person has a team that has to fix the problems that arise. So this is primarily about "where are my fires, and what is the character of each fire?" across a series of individual events. It could be both, in that sense, but it's geared somewhat more toward people who are managing infrastructure than toward application developers.

[Question about deployment models.] Yeah, it's currently on-prem.
We have made it so that, out of the gate, we do allow you to use Elastic as a service if you don't want to have Elasticsearch hosted in your data center, and we are looking at making it very cloud friendly as part of some of our key operational deliverables. It's not part of the hosted offering currently, and we are looking at how to make it more of a cloud-first offering: the on-prem one can have everything on-prem, but we'd also like a hybrid model for people who want to use cloud services and not directly manage the databases themselves, and then a fully managed solution in the cloud for people who want that.

[Question about the timeline for that.] By the end of the year, yes.

[Question about access control.] As of right now, the access control is observer: everyone has observer privileges, and there's really no write capability in this particular interface. In the future (we've discussed it, but I don't know specifics on timeline) we'd like to be able to limit certain views and certain information to different roles, but as of right now there's no role-based access. Right now you can see all the data that's available in the system, not just the organizations you belong to. Authentication is handled through what was the Delivery authentication, which is now just the Automate authentication, and which can sync up with SAML and LDAP and all that stuff. This interface doesn't have its own role-based access or user accounts; it's the same user base Automate has, those users get organized in whatever structure they're organized in at any given time, and right now you get to see all of it.

[Question about whether seeing everything is the default.] Yes, that is the default for now, but if you want, you can use saved searches as user preferences: if you don't want to see all of it, you can drill down and we'll save those preferences. Until we have RBAC, that would be the only way to do it. I'm kind of curious to understand a little more about that use case, though. It sounds like there is some need for only people who are part of an organization to be able to see data from that organization on the Chef server. Cool, that's good to know. We've been playing with the idea of RBAC, but because visibility is a read-only system, whereas the previous products involved write access, people were less worried about it. I can definitely see that chipping away at the value of having everything in one place, if you have role requirements that keep it from actually being useful. Cool. In the back?

[Question about Policyfiles.] We've talked about Policyfiles, but a direct integration with them isn't on the roadmap right now, just because not enough people have told us they're using them for it to be at the top of our hit list for an integration.

[Question about whether Chef Solo nodes need an extra agent.] No, the Solo agent itself knows how to send data to Automate; it's in the chef-client binary itself, so there's no additional agent you need to install to send the data to Automate, just chef-client. The binary just knows how to do it out of the box, as of 12.11 or so.
[Question about what you have to configure on the client side.] In the client.rb there is some configuration pointing it at the Automate endpoint for the data collector, and you give it a security token so that Automate isn't just accepting all data from anyone; then you just run it and it'll send the data. Yep, so for all of your nodes that are running Chef Solo, you just either add a client.d sub-configuration or modify the client.rb to add the data collector information, and they'll start sending data. You wouldn't have to bootstrap them with a new client or anything: if you already have, say, ten thousand nodes, you just need to run some script that iterates through them and drops off a file in a certain location, and it'll just work. That's how we did it internally, just with knife ssh, but you can do it with any kind of SSH automation.

[Question about when data gets sent.] Right, so any time the chef-client runs, it sends the data. It's kind of like Reporting in that sense, but instead of going to Reporting it's sending it to visibility. It runs when the chef-client runs, so it depends on how frequently you run it.
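The drop-a-file approach for an existing fleet is the same stanza shipped as a standalone drop-in, assuming your nodes run a chef-client new enough to read a client.d directory; a sketch with placeholder values, which you could push out with knife ssh or any other SSH tooling:

```ruby
# /etc/chef/client.d/data_collector.rb
# Same settings as the client.rb edit above, delivered as a drop-in file so
# existing nodes don't need to be re-bootstrapped.
data_collector.server_url "https://automate.example.com/data-collector/v0/"
data_collector.token      "TOKEN_FROM_YOUR_AUTOMATE_SERVER"
```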
[Audience question about using this data in the delivery pipeline.] Are you thinking of incorporating system-level information into the workflow as you're making builds? Where were you thinking? Yeah, that is right in the wheelhouse of workflow; catching those errors before they hit your important production systems is workflow's bread and butter.

[Question about whether the Automate server is a single box.] Yes, all the pieces are on one box as of right now. In the future we're trying to design it to be more service-oriented, such that if you wanted to, you could break them up, but it comes as an omnibus package with everything you need, so you can just install it and go. We'll probably offer more complex installation options in the future, but right now everything is on the box.

There are also questions in the back. I'm going to provide an answer, and you can let me know if it answers your question. On the visibility side, Automate can work independently of a Chef server, so it can receive data from any number of Chef servers. Automate itself, though, requires a Chef server in order to operate; that is its own standalone instance, so as part of the Automate installation you need a Chef server as of right now. But visibility can collect data from any number of Chef servers. And yes, you can use an existing Chef server as part of that; it just means that, for the workflow side of things, it uses that server for users, search, and that kind of back-end functionality.

[Question about how the Chef server integration works.] In the chef-server.rb configuration you input a data collector configuration similar to the one you're doing for your Chef clients, and the data that the Chef server used to send to Analytics (all your CRUD operations and that kind of thing) then gets sent to visibility.
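The Chef server side of that configuration looks similar; a sketch with placeholder values, and note that the exact option names can vary between Chef server versions, so check the documentation for yours:

```ruby
# /etc/opscode/chef-server.rb
# Placeholder values; run `chef-server-ctl reconfigure` after editing.
data_collector['root_url'] = 'https://automate.example.com/data-collector/v0/'
data_collector['token']    = 'TOKEN_FROM_YOUR_AUTOMATE_SERVER'
```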
[Question about data volume limits and viewing history over time.] I don't know that off the top of my head. If you're streaming that much data to us, I think there are complexities that come with that amount of data, and that's something I know we're working on. The goal is, yes, to provide the ability to view what has happened over a period of time, but we haven't quite gotten there yet. The back end we're using makes it really easy to archive or export data if you don't want it on the local Elasticsearch cluster. Basically, with Elastic, if you want to ship data to Amazon Glacier or put it somewhere else when you're done with it because it's outside your time window, that's very simple to do, or you can just clean it up if you don't want it anymore. Technically, we have Elasticsearch indices for each day, so you can archive days, or clean them up. There's nothing built in right now that will automatically delete them, so for now that would be an external step, but it's really easy to do.

Yeah, we're definitely going to be adding a lot around data retention policies, both "what data do I really care about," which we anticipate will be different for different customers, and "how long do I care about it for." There's a measure of complexity in the server to support a lot of different retention policies and different types of retention policies, and we want to make sure we get that right so that we don't end up forcing you into a retention policy you don't or can't follow.

[Question about Habitat data.] That has been put at the top of my list. The question was when Habitat data will be in the system, and that is at the top of my list of things to achieve in Q3, so stay tuned.
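Because nothing deletes the per-day Elasticsearch indices automatically yet, the external cleanup step mentioned above can be a small scheduled script. A sketch using the elasticsearch Ruby gem; the insights-YYYY.MM.DD index naming is an assumption for illustration, so adjust the prefix to whatever indices your installation actually creates:

```ruby
# prune_insights.rb: delete per-day indices older than a retention window.
require 'date'
require 'elasticsearch' # gem install elasticsearch

RETENTION_DAYS = 90
client = Elasticsearch::Client.new(url: ENV.fetch('ES_URL', 'http://localhost:9200'))

# Assumed naming scheme: one index per day, e.g. "insights-2016.07.13".
client.indices.get(index: 'insights-*').keys.each do |index|
  date = Date.strptime(index.sub('insights-', ''), '%Y.%m.%d') rescue next
  if date < Date.today - RETENTION_DAYS
    # Snapshot or export the index first if you still need the history.
    client.indices.delete(index: index)
    puts "deleted #{index}"
  end
end
```

Shipping old indices to S3 or Glacier instead of deleting them would follow the same pattern, using Elasticsearch's snapshot API against a repository you configure.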
[Question about scheduled reports.] Yeah, that's something we've been discussing: what the exports and the delivery method would look like for people who want regular reports sent to certain systems, whether by an API, by email, or whatever that would look like. That's something we're investigating. We do want to be API-first, as far as allowing people to easily integrate with our back end and build out whatever they want to build out. In the short term, if you want very specific reports with certain data fields in them, you can do that using Kibana, where all of the events we're capturing are available: you can organize and sort them however you want and create your own visualizations and lists. But we do plan to add that to the product to make it very easy.

[Question about access to the underlying data.] The value proposition is that you will always have unfettered access to the raw data we collect. On top of that, we run our own data processing pipeline, where we process that raw data in ways we anticipate will be helpful for you, and then we provide visualizations on that data so you can really quickly and easily understand it. And if anybody wants more of a one-on-one conversation about the product, we do have the product suite downstairs from 4 to 5, so if you want to come tell us about your use case and talk through it, it's room 407.

[Question about sizing and capacity planning.] That's a wide range, because it matters whether you're someone who runs, say, a maintenance run list once a day or someone who runs full suites every 30 minutes. We haven't been able to do that level of capacity analysis yet; internally we haven't really outgrown our own installation, which is a moderately sized installation of something like 1,220 nodes, but we haven't had a chance to do it. We will probably offer some sort of capacity planning, because I've built systems like that before and it's a very important part of it; we just haven't had enough time to do that work yet. Cool, yeah, in the back?

[Question about the size of that internal installation.] I think we're on an m2-something-xlarge on Amazon, though honestly I have no idea; it's actually relatively small, and we're still at some gigabytes of data.

[Question about how this compares to Chef Analytics.] So basically this is the next generation of our approach to analytics. As for how it's significantly the next step for us: the first piece is that it allows us to get all of your data in one place, whereas Analytics and Manage were both one server to one installation of the analytics platform. That's a pretty big deal, because you don't want segregated data all over the place; you want one place where you can see all of your data and be able to programmatically understand the health of your infrastructure. Next, we've basically added significantly more and richer data from the sources we're collecting from. The main feature we currently don't have that Analytics has is rules and notifications. We have the data tap, which is a fire hose of data, but
then actually sorting that out and sending it to a specific endpoint is not there yet, so we're trying to figure out exactly what people want out of that tool. In retrospect, we weren't really happy with the rules and notifications engine in Analytics; we think it was too difficult to use, so a lot of people weren't using it to its full potential, and it was really easy to get way too many notifications, to the point where they weren't useful anymore. So we're looking at exactly how we want to do rules and notifications and what that looks like. That ties into an earlier question, which was how do I get the data from here to people who need a report, for example, and what does that look like. That's something we're really looking into, along with the API question: is there an API, can I just take this data programmatically every now and then and ship it to another system? That's one thing we're looking at. We're also looking at perhaps a scheduled email report of your own choosing on a regular basis, but we're just playing with different ideas and presenting them to customers to see what fits those use cases in the best possible way, and in a significantly more user-friendly way. The other big thing around rules and notifications is that the problem with Analytics was that everyone could edit the rules and notifications if they had access to Analytics. You might want your developers to be able to write a rule or notification just for the service they care about, but you don't want them to be able to delete your operations team's core rules and notifications that you really care about. Those are all use cases we have in mind as we figure out exactly what we're going to deliver. Yeah, in the back?
[Question about how to give input and feedback as the product evolves.] Yeah, you can reach Tasha at chef.io.
Yeah, I mean, I am very open to being on customer calls, and Mark and I think a significant part of our job, and Tom's as well, is talking to customers and getting feedback before we build things and while we're building them, just to make sure we're really hitting the nail on the head when we deliver. Yeah, exactly. In the back?

[Question about migrating data from Chef Analytics into this.] We anticipate that you could do a migration from the Analytics back end into this, but we have not productized that right now.

Cool. Any other questions? Well, as you may know from Barry's presentation earlier, you can go download a free trial of this and give it a shot from our website, chef.io, and we also have some tutorials that will take you through using it, if you want to check it out. It's a new product, and we're eagerly awaiting feedback to see how it works and how people want to use it, so the more communication the better; we're really interested in what you think, and we'll be delivering a lot of new and awesome features pretty soon. So give it a shot, let us know what you think, and, as we said, come talk to us in the product suite afterward. Thank you.

Metadata

Formal Metadata

Title Chef Automate: Visibility Feature, Q&A Panel - July 13, 2016
Series Title ChefConf 2016
Author Duffield, Tom
Hartmann, Christoph
Dennard, Mark
Drew, Tasha
License CC Attribution - ShareAlike 3.0 Unported:
You may use, modify, reproduce, distribute, and make the work or its content publicly available in unchanged or modified form for any legal and non-commercial purpose, provided that you credit the author/rights holder in the manner they specify and pass on the work or content, including in modified form, only under the terms of this license.
DOI 10.5446/34622
Publisher Confreaks, LLC
Publication Year 2016
Language English

Content Metadata

Subject Computer Science
Abstract Learn about the visibility feature of Chef Automate. Gain insight into operational, compliance, and workflow events. There is a query language available through the user interface and customizable dashboards. Insight into your network and development processes has never been easier.
