Bulletproofing The Cloud: Are We Any Closer To Security?

Video in TIB AV-Portal: Bulletproofing The Cloud: Are We Any Closer To Security?

Formal Metadata

Bulletproofing The Cloud: Are We Any Closer To Security?
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Cloud security has come into focus in the last few years; while many ways to break the cloud have been proposed, few solutions have been put forward. This talk is primarily a conceptual discussion of how cloud providers can and should be (but probably are not) protecting both their own and their clients' assets in their cloud implementations. It will discuss the known issues with cloud, and a readily available proposed solution to some of these issues. The presentation will conclude with a demonstration of an actual implementation of this theory at a cloud hosting provider. An understanding of basic network security technology is required.

Ramon Gomez is a security professional working for a cloud hosting provider. He has been working in correlation theory for the last 8 years, including time spent at a prominent North American vendor of SIEM software, providing theory and logic to improve the correlation capabilities of the product. His primary areas of professional expertise are correlation theory and intrusion detection/analysis.
Bulletproofing the cloud. So first off, who am I? I've been working in infosec for about 11 years, and I had about five years of hobby work before that. The primary interests for me are penetration testing, intrusion detection, and log correlation. I'm currently employed as an infosec generalist at a cloud provider, and I've previously worked at several Fortune 100 companies. After this talk, if you should need to reach me, you can get me at blinded science at gmail com.
So what is this? The idea of cloud is not new anymore; a lot of people really understand what cloud is at this point. The problem we have is less of an understanding issue and more one of knowing how cloud affects what we do day to day. A lot of the companies in the cloud space are approaching it as a very traditional type of environment: they treat it just like any other environment they put servers into. The research that built this particular presentation is primarily a response to last year's presentation, "Cloud Computing: A Weapon of Mass Destruction" (I'll get into that in a few minutes), plus a dissection of the current security postures of the cloud providers out there, and a proposal for what they could do to improve their posture.

So what is the cloud? This is a very dictionary-type response to that question; I'm just going to let you read it rather than go into it in depth. In general, as people understand cloud, it's the use of shared resources to accomplish some sort of task, and we've got a variety of different flavors, such as infrastructure as a service. For the purposes of this talk I'm really focusing on cloud as infrastructure as a service, or SaaS to a lesser extent. Just to keep everyone on the lighthearted side, because cloud is very serious business for a lot of people and I want to keep this fun: this is a picture of a kitten.
All right, so first off: last year we had the talk about a weapon of mass destruction, but prior to that we had the DEF CON 17 talk, "Clobbering the Cloud." Clobbering the Cloud, I shouldn't say it focused on this, but it had an example of using the Salesforce environment to build a Nikto-based scanner. They showed a tool called Sifto that was run from the Salesforce environment to scan remote servers, which is obviously not what that environment was designed to do. Last year, though, we had "Cloud Computing: A Weapon of Mass Destruction," and again, like I said, this talk is primarily built as a response to that particular talk. During that talk they showed that cloud providers essentially aren't doing much internal policing of their clients. Additionally, there's kind of an unofficial policy: while they have an official policy saying there's no scanning from the cloud, the unofficial policy is that as long as complaints aren't received, nothing's going to be done about it. So, what's wrong with the cloud?
Why is there a problem here at all? The first problem we have is easy access: most cloud providers build their environments so people can get into and out of them very quickly and easily. That enables other problems, such as anonymity and fraud, and the issue of being anonymous isn't necessarily a problem except for the fact that it leads to fraud. A lot of people use cloud resources, and I'm sure we've probably got a few of you in here who have used cloud resources in ways they weren't intended to be used; some of that's fraudulent, some of it isn't. That leads to another problem, though: contention for resources. Again, cloud really is an environment that's meant to be shared, so when you have one client who's misbehaving, it can affect the others through contention. The cloud provider faces problems with damage to the infrastructure and, as I mentioned, fraudulent customers. Fraudulent customers frequently turn out to be using false credit cards that can't be charged or get charged back, so they've used cloud resources without paying for them. And there's the providers' proven inability to address their own security, based off of the previous presentations. The client, however, faces another series of problems, and if you are a cloud consumer these things should concern you. One compromised client of a multi-tenant environment can affect others, and that gets back to contention. The larger problem for most of the customers, though, is that they are no longer in control of their entire IT infrastructure; they've offloaded that to someone else, especially with SaaS and, sorry, IaaS, so a user who's compromised may not have any capability of knowing they're compromised, and may never know.

So I'm going to get into what most of the cloud providers are doing in this space. The providers are treating cloud security like a traditional hosting environment. That's very straightforward and seems logical. In most cases clients are given a virtual firewall with inline IPS services, and, getting to the IaaS providers, they frequently offer vulnerability assessment for free. This is great: it shows the providers are at least somewhat concerned for their clients, and the fact that they're giving away the service in a lot of cases is a good, positive move forward. But each client's virtual instance is independent, which means the clients are essentially fending for themselves with no coordinated enterprise security.
So, the problems with the conventional solutions a lot of cloud providers are using. IPS first of all: it's very difficult for providers to offer a prepackaged IPS that works for all clients and won't block legitimate traffic. That's pretty straightforward: when you put together an environment where people can drop in and out very quickly, it's tough to give them something cookie-cutter that will work for everyone. IPS in particular is a danger because you're talking about blocking traffic, which in some cases may be legitimate traffic. Information coming from an IPS is frequently incomplete: you have encryption that keeps you from seeing the data inside, and the IPS lacks awareness of what's going on on the endpoint. The last problem with IPS is that it has to work at line speed, so very complex correlations aren't possible. What do I mean by very complex? A lot of the IPS solutions out there do a really good job of vulnerability-assessment-to-IPS correlation and other basic correlations. When I talk about complex correlations, I'm talking about behavioral correlations that go back two months. The IPS can't keep that amount of data in memory in some cases, or it can't look at it quickly enough to make line-speed decisions, so it sometimes has to make incomplete decisions.

Then we get to the problem of traditional network design. In a traditional network design, and I'm sure as security practitioners we've all dealt with this in the past, you have the turtle shell approach: hard on the outside, soft on the inside. You're focusing on your external threats, you're assuming the internal hosts are trusted, and beyond that, your clients really aren't benefiting from the security that's being generated by the other clients.
So how is that working now? What I did is, I have access to a series of logs from a variety of my clients. I took those logs and did what probably isn't exactly a scientific analysis of them: I looked for the large cloud providers, for hosts that recurred from those providers, and for how long it took, after I sent an automated alert to that provider that there was something wrong on their network, for them to respond to it. Because most providers don't respond to those types of emails, I had to make guesses about how they were performing on their side, which may lean toward the worst-case scenario. If I sent out an email about a host and never saw that host again, that data got thrown out, because I couldn't be sure. Also, if I saw a host, saw it again, and then never saw it after that, I couldn't be sure the provider actually fixed it. I just had to make guesses based off the data I was seeing. So I can't say for certain what the security posture is inside a company, but I can guess at the nature of it based on the behavior of their network and their personnel. The guesses were based on how frequently a particular host contacted my network and how long it took for it to stop. All my data is from the first six months of 2011.

The first one is Amazon Web Services. I had a single recurring host from Amazon Web Services, and given their size, that's probably a really good indicator that they're very responsive to emails or complaints about hosts on their network. This is the raw data, and I'm going to explain what it is. It's coming from, essentially, a SIEM. Each of these is two entries related to one event, and I color-coded them so you can see there are two different things going on. In this particular case, it's a sweep of my network for open Tomcat servers, followed by an attempt to brute-force those servers. Based on this, we know Amazon's response time to complaints and incidents is probably at least 14 and a half hours, because that's the amount of time that elapsed between the two incidents after I complained about them to Amazon.
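The recurrence analysis described above can be sketched in a few lines. The timestamp format is a made-up stand-in for the SIEM data in the talk, and the worst-case assumption (response time equals the gap between the first and last sighting of a host) is the one the speaker describes:

```python
from datetime import datetime

def response_time(sightings):
    """Worst-case estimate of a provider's response time for one
    offending host: the gap between the first sighting (when the
    automated complaint was sent) and the last sighting. A host seen
    only once is inconclusive -- that data gets thrown out."""
    if len(sightings) < 2:
        return None
    times = sorted(datetime.fromisoformat(t) for t in sightings)
    return times[-1] - times[0]

# Two sightings of the same Tomcat brute-forcer, 14.5 hours apart:
print(response_time(["2011-03-01T02:00:00", "2011-03-01T16:30:00"]))  # 14:30:00
```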
Next up is Rackspace, which acquired Slicehost. There were 10 recurring hosts from Rackspace. I should note that, from the outside, unless I do a scan of their network, I can't tell whether these are actually cloud devices or traditional devices; but again, this is more about their responsiveness to complaints and less about their cloud in particular. This is an event from Rackspace that represented an SSH scan of my network. Rackspace's response wasn't as good as Amazon's, but I'm sure a lot of us dealing with incident response know that 48 hours, while probably a little longer than we'd want, isn't completely unreasonable: in a lot of cases you're trying to contact customers, verify what you're seeing, and make sure there was actually an abuse of the network going on, and sometimes that can take some time. The next one I'm going to show you is from a provider where, and I know some of you are in the crowd here, if you're looking for services that won't be shut off no matter how bad you are, go to SoftLayer.
I had five recurring hosts from SoftLayer. All of them spanned multiple days, and as far as I can tell, SoftLayer never responds to complaints or incidents, or at the very least you can measure their response in months. I have two pieces of data here. The first one is actually an ICMP sweep that they do of my network every month; the second one is an SSH sweep. I'm not really sure why SoftLayer feels the need to scan my IP space once a month and ignore my complaints about it every month, but I can kind of understand that someone may say, "okay, it's just an ICMP sweep, I'm not too worried about that, I'm going to ignore it." But if you look at the other case, it's a little more obvious: for the SSH scan it took them nine and a half days to respond. That's a week and a half, which is not the greatest response time, and I can't even be sure they shut the host down; the host may have just given up trying to talk to me.

So how can we tighten this up? Clients should have their own IDS and firewall, but hosts that are attacking multiple clients should be detected and shunned by the provider. Clients aren't capable of sharing information with each other, or if they were, it would take a lot of effort on their part. The provider is actually in a unique situation where they can see all the traffic on the network if they try. So as a provider, if you've got a series of systems on the outside attacking pretty much all of your customers, those customers may not have any idea that they're part of a coordinated attack, but you should, because you can see all of it. The providers should be taking steps to help the clients protect themselves; they're not. The providers should also be looking for intentionally malicious internal clients, consulting events from all client environments.
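The cross-client detection argued for here, where only the provider can see one source hitting many tenants, boils down to a simple aggregation. The event shape and the three-client threshold below are my own illustrative choices, not from the talk:

```python
from collections import defaultdict

def hosts_to_shun(events, min_clients=3):
    """events: (attacker_ip, client_network) pairs reported by
    per-client sensors. One client seeing a scan is routine; the same
    source hitting several tenants looks like a coordinated attack
    that only the provider is positioned to detect and shun."""
    targets = defaultdict(set)
    for attacker, client in events:
        targets[attacker].add(client)
    return {ip for ip, clients in targets.items() if len(clients) >= min_clients}

events = [("198.51.100.7", "client-a"), ("198.51.100.7", "client-b"),
          ("198.51.100.7", "client-c"), ("203.0.113.9", "client-a")]
print(hosts_to_shun(events))  # {'198.51.100.7'}
```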
Looking for enterprise-threatening external agents improves things from the outside, but if you look at how the providers are approaching security in the cloud, the single largest unaddressed threat is the client networks. The client networks are a danger to both the provider and the other clients, again because of contention and damage to the infrastructure. So why aren't the providers doing this? What challenges are the providers facing? The first one is frequent, rapid client changes. The nature of cloud is such that clients come and go all the time; you can't really be sure what they're doing with your network, and they're probably all doing something different. Clients are going to have a wide variety of services, users, and ways of utilizing their resources. Your clients are in an unknown state, and by unknown I mean this: most of the clients are probably going to be normal, law-abiding citizens. They're going to sit on your network, use your resources, maybe run a web server or a database server, things like that. But you can't be certain of that. Some of the clients are going to be bad guys; some of the clients aren't going to be bad guys but are going to be compromised by bad guys. So you've got a whole host of clients in an unknown state. If you're going to take action on incoming traffic, or even outgoing traffic, you need to be as close to zero percent false positives as possible, because you're taking action on behalf of the clients. A client in his own environment knows what he's expecting to see; he can be pretty certain that a certain type of traffic does not belong in his network. You as a provider can't be as certain, because you're trying to put together a cookie-cutter sort of solution that works for everyone. So you need to make sure you're as close to zero percent false positives as possible if you're going to take action on these things.

What stays the same? An inline IPS, owned and controlled by the client; there's no reason why you shouldn't give them that. A firewall, again owned and controlled by the client; in most cases these are virtualized firewalls for the cloud providers. Vulnerability assessment; we leave that in. All this stuff is well-understood technology, and it allows clients baseline control over their own networks within the cloud. A lot of clients are going to use this stuff. A lot of clients are using cloud specifically because they don't want to have to run their own IT infrastructure, and their level of expertise is a little bit lower than what yours or other security practitioners' might be, so they may not use these technologies at all, but at least you're providing them.

All right, so what do we add to the infrastructure? How do we start building the system that will protect our clients and us from potential damage? This seems pretty straightforward to me. Do NetFlow inside your environment: understand what traffic is there, what traffic should be there, and get a baseline for what things are supposed to look like. We should be adding an enterprise-wide IDS. This particular IDS should be completely out of the visibility and control of the clients, because again, they're in an unknown state. You don't want clients being able to see that you're watching them and will take action if they keep misbehaving, because if they can see it, they can try to evade it. They're going to try to evade it anyway, but at the very least you're keeping that within your own network. Network access control: this is pretty straightforward too; it's a little different for cloud than it would be for a traditional environment, and I'll get to that in a minute. You're going to throw in an event correlator, which is what's traditionally known as a SIEM; for my use here, we're talking about a forensic-analysis SIEM rather than the regulatory SIEM most people are familiar with. And log consolidation, and on-access misconfiguration detection.
That phrase just means watching the network for new servers and services and checking them for very basic misconfigurations.

All right. When I talk about some of these things, a lot of people ask me: why not use OSSIM? I've thrown the URL up here in case anybody wants to investigate open-source SIEMs. OSSIM uses many of the same tools I'm suggesting. The problem I have with it, and the reason I don't personally use it, is that it makes assumptions about the network it's placed into, and the tools it uses are all prescribed for you: when you use OSSIM, you know you're going to use its asset database as your system for managing data, and all the tools that go along with OSSIM. Additionally, OSSIM's correlation engine is not as flexible as SEC, which is what I'll be talking about in a minute. But OSSIM does have advantages if you're looking for something where you won't have to manage it yourself. Part of the problem with building something yourself is that you've got to deal with updates and keep track of what's vulnerable; OSSIM of course takes care of all that for you.

For NetFlow in particular, I like to use a tool called nfdump. There are a lot of NetFlow analyzers out there in the open-source community. By the way, for all the tools I'm going to talk about, I specifically chose open source for this talk, because I wanted to make sure that if any of you want to replicate this, you could. There are a lot of open-source NetFlow projects out there; most of them are inactive. I chose nfdump from the variety of inactive or semi-active projects mostly because all it really does is throw everything into a database and provide you a lot of command-line tools for analyzing that database. There's a front end for it called NfSen that I don't personally use, but it's convenient for scripting. NetFlow, as we know, is used to monitor flows into and out of the network. We can use it to monitor for excessive, prolonged network utilization; it can also be used to trend network performance and flag suspicious spikes.
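nfdump itself is driven from the command line; the spike-flagging idea, per-host traffic checked against a running baseline, can be sketched like this. The multiplier and warm-up period are arbitrary choices of mine, not from the talk:

```python
def flag_spikes(byte_counts, factor=5.0, warmup=4):
    """byte_counts: outbound byte totals per interval for one host,
    e.g. aggregated from nfdump output. Flag any interval that exceeds
    `factor` times the mean of all earlier intervals."""
    flagged = []
    for i, count in enumerate(byte_counts):
        if i >= warmup:
            baseline = sum(byte_counts[:i]) / i
            if baseline > 0 and count > factor * baseline:
                flagged.append(i)
    return flagged

# A quiet host that suddenly starts pushing ~20x its normal traffic:
print(flag_spikes([100, 120, 90, 110, 105, 2000]))  # [5]
```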
The data is sent from internal switches and other network devices for analysis. Again, you're not relying on the customer environment, so you're not relying on their virtual firewall; you rely on your own infrastructure devices to give you this data. It can also help provide network, server, and service inventory data for keeping track of what's running inside the client networks and your network.

I like Snort for the enterprise-wide IDS. There's the URL for those of you who haven't looked at it before, but I'm sure most of you have probably seen it; it's well known and widely used. It's independent of the clients; they cannot see it. It's attached to the network egress and ingress points, and there are no trusted networks: Snort has that HOME_NET variable, and we're not setting it to anything. We're looking at everything; everything is untrusted by design. This will also provide some network, server, and service inventory data.

PacketFence is the NAC I've chosen to use, and there's a PacketFence talk going on tomorrow in case any of you want more information than what I give you here; there's the URL. Now, NAC breaks down into two different types of technology: pre-admission and post-admission. Pre-admission is probably what most of you running corporate networks are familiar with. Pre-admission NACs, when a host comes onto the network, look for things like patch levels, updated antivirus, all the configuration controls you're expecting to be on the server or host, and allow the host onto the network based on those things being in line with what you expect. The post-admission devices, on the other hand, look for behavior once the host is on the network. Ideally you're doing both in a corporate environment. It's much more difficult to do pre-admission in a cloud environment, because the clients really could be anything; the client is buying space from you. So for these reasons we use post-admission behavioral quarantine: it takes input from other systems and uses it to make decisions to quarantine devices.
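Post-admission quarantine, as described, amounts to "take the correlator's verdict and isolate the host." Here is a minimal sketch, with the actual enforcement (in the talk, PacketFence isolating the device) reduced to a pluggable stub:

```python
class PostAdmissionNAC:
    """Sketch of a post-admission NAC: no checks on the way in
    (cloud clients could be anything), only behavioral quarantine
    once the correlator reports trouble."""
    def __init__(self, isolate):
        self.isolate = isolate          # enforcement hook (stubbed here)
        self.quarantined = set()

    def handle_verdict(self, host, verdict):
        if verdict == "malicious" and host not in self.quarantined:
            self.quarantined.add(host)
            self.isolate(host)

isolated = []
nac = PostAdmissionNAC(isolated.append)
nac.handle_verdict("10.0.3.14", "malicious")
nac.handle_verdict("10.0.3.14", "malicious")   # duplicate verdicts are ignored
print(isolated)  # ['10.0.3.14']
```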
Then there's syslog-ng, which I'm pretty sure most of you have been exposed to at some point. Again, it's well known and widely used. All your infrastructure devices, your servers, your switches, your IDS, and so on log here. Again, these are not your customer devices, because the customer devices they have control over and can do weird things to, and part of what you're trying to protect yourself and your clients from are those client environments.

Okay, the on-access misconfiguration detection. I use a variety of tools for this. I use Medusa for the basic brute forcing; I have a list of about 10 usernames and 13 passwords I use, so it runs a scan within about 30 seconds. Metasploit for some more advanced things. And then nmap: I use nmap primarily to look for customers who have bought firewalls from us but aren't using them. It's my way of saying hey, and this has happened: customers come into our environment and set "allow all" while they're testing, then just leave it that way. I'm sure you all know that's a nightmare; the clients may not. So nmap is used to detect that and warn them: hey, your firewall's turned off right now. I have a few other odds and ends in there, specialized things like looking for open proxies, but these are the workhorses of this particular system. These tools are called by the correlation system to run basic misconfiguration checks on any new servers and services seen on the network, and that comes from the NetFlow data as well as the Snort data.
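The dispatch logic, where a new service appearing in the flow/IDS inventory queues up the basic checks, looks roughly like this. The tool invocations are just labels here (in practice they would shell out to Medusa, Metasploit, or nmap), and the port-to-check mapping is my own guess:

```python
def checks_for(port):
    """Map a newly seen service to basic misconfiguration checks.
    Every new service also gets the nmap check for an allow-all firewall."""
    per_port = {22: ["medusa-ssh-bruteforce"], 3389: ["medusa-rdp-bruteforce"]}
    return per_port.get(port, []) + ["nmap-firewall-check"]

def scan_new_services(seen, inventory):
    """inventory: (host, port) pairs derived from NetFlow/Snort data.
    Return checks to queue for services not seen before; `seen` is
    updated in place so each service is only checked once."""
    jobs = []
    for host, port in inventory:
        if (host, port) not in seen:
            seen.add((host, port))
            jobs += [(host, check) for check in checks_for(port)]
    return jobs

seen = set()
print(scan_new_services(seen, [("10.0.5.2", 22)]))
print(scan_new_services(seen, [("10.0.5.2", 22)]))  # already seen -> []
```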
where the magic occurs which is the correlation system now Mike relations are sort of choice is the simple event correlator it's a pretty simple non vendor-specific correlation system it'll keep track of events from a variety of sources it's not in line so this is a big difference from the inline IPS we're not in line here so i can make slow well-informed decisions i can keep all kinds of thing in memory as long as i have enough memory to hold it and i can spend minutes making the schedules instead of having to spend split seconds the SEC system is in charge of coordinating everything else in the environment alright so how's this work we've got here our rather geeky looking correlator our cloud with our client environment and our outgoing ISP link this symbol in the this spot right here is just this is where our NetFlow data is coming from I'm using it to generically represent our switches and various Network Devices our firewall and down below we have the triple a server on the outside we have the enterprise wide IDS again running independent of the client from time to time our correlator will fire off our on access miss configuration detection as well as our vulnerability assessment this is used to keep track of what's going on with the client systems to give us an idea of where there might be security problems so let's say we have a customer start sending out an event right these are any kind of malware could be worm propagation could be a scam all these devices would be configured to send logs back to a central location or correlator in this case additionally the two network style devices the switches the net flow and our firewalls would send our net flow data back to the core later hopefully this should allow our correlator understand there's a threat going on fire off the knack which will in turn off the server and the malware but how does that occur I'm gonna go through a few scenarios real quick on easy ways that this can be done towards the end of the talk 
I'll talk about some more complex ways to detect misbehaving hosts. But here we have our IDS, our firewall, and our switches, and these are our happy clients. We'll throw in a bad guy: someone buys a server from us, they turn it on, and they start doing bad stuff. In this particular scenario, our bad guy has started using a known hacking tool to attack outside the environment. This is actually a really simple way to detect them; the one I see most frequently with this is SIP scanners. The external IDS should understand that there are some signatures related to specific tools that are known to be bad. It detects that, notifies the correlator, the correlator says, "Hey, there's malware in here," and shuts it down.

But there are other scenarios. In this case maybe we have a pattern of traffic going out: a series of packets that individually don't mean anything, but all together represent some sort of larger pattern. In this case, let's say we've got an ARP storm going on on the inside. We could capture that with NetFlow; the NetFlow would tell the correlator something weird was going on, the correlator would have to make its own decision about that NetFlow data, and again shut the host off. Now, a quick note about nfdump, and this
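The ARP-storm case is a good example of individually meaningless packets forming a larger pattern. A rough sketch of that aggregation step, assuming flow-style records that include layer-2 broadcast traffic (the record shape and the threshold are both assumptions, not details from the talk):

```python
from collections import Counter

BROADCAST = "ff:ff:ff:ff:ff:ff"

def arp_storm_sources(flows, threshold=500):
    """flows: (src, dst, proto) records; return hosts flooding ARP broadcasts.

    Each broadcast is harmless on its own; only the aggregate count per
    source reveals the storm.
    """
    counts = Counter(src for src, dst, proto in flows
                     if proto == "arp" and dst == BROADCAST)
    return {src for src, n in counts.items() if n >= threshold}
```

The correlator would run something like this over each collection interval and feed any flagged sources into its shutdown decision.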
is common with all NetFlow data: unusual traffic patterns alone don't dictate an incident. What needs to be done is the nfdump data has to be compared with IDS, firewall, and other data to look for anomalies. An example of this would be a traffic peak combined with ARP collision messages coming from your switches, which could be indicative of an ARP cache overflow; in fact, I should probably say it's highly indicative of an ARP cache overflow. A traffic peak combined with many IRC events is probably some sort of botnet participation. As for the correlated
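That corroboration step, a NetFlow peak only becoming an incident when switch or IDS evidence backs it up, might be sketched like this. The spike ratio, IRC-event threshold, and message format are assumptions for illustration:

```python
def flag_incident(bytes_now, baseline_bytes, switch_msgs, irc_events):
    """Corroborate a NetFlow traffic peak with IDS/switch evidence.

    A spike alone is not an incident; it is only classified once a
    second data source agrees, per the talk's examples.
    """
    if bytes_now <= 1.5 * baseline_bytes:
        return None                       # no peak, nothing to correlate
    if any("arp collision" in m.lower() for m in switch_msgs):
        return "probable ARP cache overflow"
    if irc_events > 20:
        return "probable botnet participation (zombie)"
    return "unexplained traffic peak: alert an administrator"
```

Note the fall-through case: a peak with no corroborating evidence goes to a human, matching the talk's "send out an alert and have an administrator take a closer look" handling.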
IDS logs, there's a lot more information there, but we're limited in what we can see from a single event type. If an event entered your server and was then replayed by that server outbound several times, it might be a worm. It might be email, but it might be a worm. If a server contacts an excessive number of servers using the same administrative protocol, it's scanning outbound: SSH protocol scanning, for example. I just threw
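Both of those IDS-log patterns (inbound event replayed outbound as a possible worm, and one host fanning out over an admin protocol) can be sketched over a stream of `(src, dst, event_type)` tuples. The fan-out thresholds and the `"ssh-connect"` event name are assumptions:

```python
from collections import defaultdict

def classify(events):
    """events: iterable of (src, dst, event_type) tuples from IDS logs.

    Flags two patterns: a host replaying an event type it was itself hit
    with (possible worm), and a host contacting many servers over the
    same administrative protocol (outbound scan).
    """
    received = set()              # (host, event_type) the host was hit with
    replays = defaultdict(set)    # (src, event_type) -> destinations
    admin = defaultdict(set)      # src -> destinations contacted over SSH
    findings = set()
    for src, dst, etype in events:
        received.add((dst, etype))
        if (src, etype) in received:          # replaying what it received
            replays[(src, etype)].add(dst)
            if len(replays[(src, etype)]) >= 5:
                findings.add(f"possible worm: {src} replaying {etype}")
        if etype == "ssh-connect":
            admin[src].add(dst)
            if len(admin[src]) >= 10:
                findings.add(f"admin-protocol scan from {src}")
    return findings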
this one up pretty quickly earlier today to give you guys an idea of the types of correlation. These are the most common ones I catch on my network. The red ones are the ones I've marked as being close to a hundred percent chance of bad stuff going on; the rest are things where I would send out an alert and have an administrator come take a closer look.

The first one is the sweep. A sweep is just where an event is replayed from one host across multiple other hosts. Pretty straightforward; it's usually indicative of inventorying a network, sometimes protocol scanning, any number of things. A scan is where one host contacts one host and just plays a bunch of different types of events; that's like Nessus, Nikto, those types of tools. A storm is one loud, noisy host generating a lot of traffic. That may or may not be something bad; sometimes it's just someone sending out a bunch of email. It could be any number of things.

Baseline delta is a much more complex correlation. What your correlator should be doing is keeping track of what's normal for your network: the types of events that occur on your network, how clients normally perform on your network, and what the network traffic looks like from the NetFlow data. Baseline delta is just a general name for anything that deviates significantly from that, say a 150 to 200 percent spike in traffic, new Snort event types you've never seen before, things like that.

The worm I talked about briefly: you have an event come in and then play out multiple times. Web scan is just a Nikto-style scan; it's also when you've got a lot of 404s, 403s, and things that are not really completely normal for web traffic. Behavioral I probably shouldn't actually include here; behavioral is based off the Snort behavioral events. Admin protocol is SSH scans. Attack tools are known attack tools, like the SIP scanners I mentioned before; this is a big one, by the way. And zombies, which as I mentioned before, can sometimes be detected using
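Of the categories above, baseline delta is the one that needs real state. A minimal sketch of learning "normal" traffic volume and flagging the 150 to 200 percent spikes the talk mentions; the smoothing factor and the choice of an exponential moving average are my assumptions, not the speaker's design:

```python
class Baseline:
    """Rolling baseline of per-interval traffic volume.

    Flags any interval that spikes past spike_ratio times the learned
    normal; anomalous intervals are kept out of the baseline so a storm
    doesn't teach the correlator that storms are normal.
    """
    def __init__(self, alpha=0.2, spike_ratio=1.5):
        self.alpha = alpha              # smoothing factor (assumed)
        self.spike_ratio = spike_ratio  # 150% per the talk's example
        self.mean = None

    def observe(self, value):
        """Return True if this interval deviates from the baseline."""
        if self.mean is None:
            self.mean = float(value)
            return False
        anomalous = value > self.spike_ratio * self.mean
        if not anomalous:
            # Fold only normal intervals into the moving average.
            self.mean = (1 - self.alpha) * self.mean + self.alpha * value
        return anomalous
```

The same shape generalizes to the other baselines the talk lists, such as event-type frequencies or never-before-seen Snort signatures, by keeping one tracker per metric.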
IRC when you've got a big peak in network traffic.

There are limitations to this kind of solution. The first one is that you have to err on the side of caution. There may be traffic that looks very, very bad, but a customer is expecting it on their part of the network because it's totally normal for them. Getting back to the example of IRC connections: say a customer has been running an IRC server on their system for a while, or even if they haven't, they just turned one up, and then they suddenly start streaming out a lot of video. Those by themselves aren't very bad, but a peak combined with a bunch of IRC traffic could look like a zombie. So you've got to be really careful about what you turn on, when you take action, and whether you turn off or take action on the system.

The system is primarily reactive; you may be taking action after damage is already done. The goal of the system isn't prevention; an IPS is meant primarily to prevent these types of things. The goal of this system is to slow the attacker down. It's somewhat similar to a honeypot, but different in scope. What I've found is that most attackers aren't targeted attackers, they're opportunistic attackers. They're looking for a specific thing that's vulnerable on any server anywhere, and they don't really care which server it is; there's a way in, so they try the same thing against a bunch of different systems. This system is primarily designed to keep those kinds of guys out. They start scanning, they attack client number one, and with any luck they're unable to attack anybody else after that, because my enterprise-level system has detected it without the client needing to take any action at all.

So to conclude: cloud providers really don't appear to be interested in policing their clients' networks at all, and really they should be taking reliable measures to detect both malicious clients and compromised clients. And I guess at that
point, I'll take questions.

[Audience question, inaudible] No, I... I have the advantage, since I'm from a corporate environment, of an excellent marketing person who did the majority of the visual design for me. I'm nowhere near that talented.

[Audience question, inaudible] Well, the goal of the system is... I guess there's no more danger of a client compromising another client, and when we talk about compromise I mean breaking in from one client environment to another client environment, than there is of an external environment breaking into that same client environment. The concern for client-to-client is primarily one of contention. This is another thing we've seen: ARP storms, where a client starts broadcasting a lot of ARPs, trying to bypass the switches. What that does is put a load on the network, and since this is a shared network, everyone else gets less network throughput. We detect those sorts of things using an internal series of IDSes that I didn't really talk about here, or NetFlow data.

Anybody else? No? Okay, thank you for sitting through; I appreciate your time.