Transcending Cloud Limitations by Obtaining Inner Piece

Video in TIB AV-Portal: Transcending Cloud Limitations by Obtaining Inner Piece

Formal Metadata

Title: Transcending Cloud Limitations by Obtaining Inner Piece
Subtitle: with De-Pack Choppa
License: CC Attribution 3.0 Unported
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
With the abundance of cloud storage providers competing for your data, some have taken to offering services in addition to free storage. This presentation demonstrates the ability to gain unlimited cloud storage by abusing an overlooked feature of some of these services. Zak Blacher is currently pursuing a Master of Mathematics in Computer Science, and expects to graduate at the end of August. He has previously completed a Bachelor of Computer Science and a Master of Science in Computer Science, having worked with the FIVES research group. He has held internships on the platform team at Sandvine Inc. and the digital security team at Compuware Corp. Social media: IRC: chalk on #wolf @ espernet
Okay, it's just about one o'clock, so I guess I'll get started. As you can probably tell from the title of this talk, it's about cloud storage: I'm going to be covering an API-level design vulnerability in a few different cloud systems. So, a quick introduction to that. My name is Zak, and I'm a student at the University of Waterloo. Like many of you guys here, I've been interested in computer security and applied security for a very long time. This is my second DEFCON, and the first time speaking at DEFCON or at any conference bigger than about twenty people, so thanks; hopefully I'll get that same response afterwards too, but that remains to be seen. So, I'm giving a talk on cloud storage. Before this talk I was doing a little bit of recon, speaking to some of my friends and trying to find out what it is that they use cloud storage for. A lot of them use it as a sort of USB-key replacement: they use it to share large files, ten megabytes or bigger, with friends; they use it for backups of their documents; or they use it for availability and accessibility across several devices. For the most part it replaces USB keys, and a lot of them still treat cloud storage systems the same way they treat USB keys: as a large
container that they just throw files into until they run out of space, and then they delete a few files to free up a little bit of room. But one of the cool things about cloud storage systems is that they have many more features than just providing space. I have a little chart here, I don't know if you can see it, but it lists some of the additional mechanisms that these cloud storage providers offer, like history or backup retention, and that's really what we're targeting with this. So
the vulnerability, the main discussion I want to have here, is the idea that treating files as blocks filling up a large box doesn't quite represent cloud storage once you add this time dimension. If we reframe that previous picture as a storage space/time graph, essentially a Gantt chart, then when we're adding files we're adding them over different time intervals, and when we remove files we can see that each file's lifespan stops after a certain amount of time. With this kind of representation we can think of the amount of space we're using as a sort of sliding bar: at any given time we occupy a different amount of space. That gives us an interesting mechanism for recovering previously deleted files. So really, what we're talking about is that a lot of these cloud systems have a size limitation in their quota management, but a time-duration limit on their history and backup retention. When you have these two independent quota dimensions, you effectively have unlimited storage, because you can exploit history retention to get additional space; we're limited by our upload bandwidth rather than by the quota parameters of the existing cloud system. So what this tool does when uploading a large file is cut it into several smaller fragments, upload those fragments as successive versions of some arbitrary new file, and then top it all off with a chunk of zero size. That way the quota accounting sees this as a zero-size file, despite the version history sitting behind it. Retrieval is very easy: we pull all the versions and glue them back together with cat. Going back to the storage/time graph I was using earlier: I used it to represent a single file, but really we can treat it more like this, where different versions of the file together make up the original, yet each occupies a considerably smaller amount of space over a shorter span of existence. Looked at over time, our accounted usage is actually close to zero.
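The space/time view described above is easy to make concrete. Below is an illustrative Python sketch (Python being the language the tool is written in), with made-up timestamps and sizes; the quota a provider "sees" at any instant is just the total size of the files alive at that instant:

```python
def usage_at(files, t):
    """Quota usage visible at time t: sum of sizes of files whose
    lifespan (upload..delete) covers t."""
    return sum(size for start, end, size in files if start <= t < end)

# Each file is (upload_time, delete_time, size_in_MB) -- invented numbers.
files = [
    (0, 10, 64),    # lives from t=0 to t=10
    (2, 5, 128),    # short-lived large file
    (6, 8, 32),
]

print(usage_at(files, 3))   # 64 + 128 = 192
print(usage_at(files, 9))   # only the first file is still alive: 64
```

Summed over versions rather than whole files, the same arithmetic is what keeps the accounted usage near zero in the scheme the talk describes.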
That's fairly simple, and that's the whole idea. I rolled it into a tool for you guys; I call it de-pack choppa, running with this whole cloud theme: it chops up files, packs them, and then de-packs them afterwards. It's really a versioned storage-management framework. I've created a pluggable framework that lets you abstract away the API implementation specifics of the individual cloud storage utilities. The tool also maintains a storage database backend for the fragmentation, keeping the history of the fragments, the table of fragments, and the original file that those fragments came from, and it provides a command-line interface to the core functionality of these individual components. But I can talk all day up here; you guys really want to see a demo, right? Right? All right.
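As a rough sketch of the chop/pack/de-pack cycle just described: `upload_version` here is a hypothetical stand-in for a provider call that overwrites a file and pushes the old contents into version history (the real tool talks to a cloud API), and the in-memory `HISTORY` dict stands in for the provider's version store:

```python
CHUNK = 512 * 1024  # nominal fragment size, as in the demo

# Stand-in for the provider's version store; a real module would talk to
# a cloud API here instead of this dict.
HISTORY = {}

def upload_version(name, data):
    """Hypothetical provider call: overwrite `name`, pushing the previous
    contents into its version history."""
    HISTORY.setdefault(name, []).append(data)

def pack(data, remote_name, chunk=CHUNK):
    """Upload `data` as successive versions of one remote file, then cap
    it with a zero-byte version so quota accounting sees a 0-size file."""
    for i in range(0, len(data), chunk):
        upload_version(remote_name, data[i:i + chunk])
    upload_version(remote_name, b"")  # the zero-size cap

def unpack(remote_name):
    """Pull every stored version and glue the fragments back together
    (the talk does this with `cat`); the empty cap contributes nothing."""
    return b"".join(HISTORY[remote_name])
```

The point of the sketch is the shape of the trick, not the transport: swap `upload_version`/`unpack` for real API calls and the rest carries over.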
So, yeah, a little bit of a resolution problem there. Is that better? Okay, so
what I'm starting with here is an empty directory; I'm just showing you that there's nothing up my sleeves. Now I'm creating a 64-megabyte file that I'm going to upload to this service. Here's the checksum of
it; I'm just saving that for later, and then let's upload it. One of the other things I'm trying to demonstrate here is that there are ways of circumventing existing detection mechanisms. What I'm doing, and you can see it here, is that the file size of the individual fragments is around 512K, plus or minus five percent, drawn from a normal distribution, to get around any mechanisms in place for detecting continual overwrites of the same file. I'll get into this a little later; there are a bunch of different techniques you can use to mask what we're doing, but for now this demonstrates it fairly well. This information is generated by the de-pack tool itself: it shows the individual chunks that belong to this file, as well as the file size as I upload them. I'm going to use this later to compare against the information I get back from the server; this is all locally generated. We're just about finished; you can see the second-last fragment is about 200K, and then the last one is zero-size, just to top it all off. And you can see I've gone back into this folder: the checksum is there, the checksum I use to act as the handle.
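The ~512K ± 5% fragment sizing described here could be generated along these lines; this is an illustrative sketch, not the tool's actual code, and `fragment_sizes` is a made-up name:

```python
import random

def fragment_sizes(total, mean=512 * 1024, spread=0.05):
    """Yield fragment sizes drawn from a normal distribution around `mean`
    (roughly +/-5% here), so successive version overwrites don't all
    share one telltale size."""
    remaining = total
    while remaining > 0:
        size = int(random.gauss(mean, mean * spread))
        size = max(1, min(size, remaining))  # clamp the final straggler
        yield size
        remaining -= size
```

Note the last fragment is whatever is left over, which is why the demo's second-last upload comes out around 200K.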
So, back to where we were. Now I've deleted that binary, and I'm busy reconstructing the file from the fragments I'm getting back from the server. These chunk numbers you see here are the information provided by the server's REST API, which gives us the mapping to those individual chunks we were looking at earlier. If you compare this list with the list we had earlier, you'll see a one-to-one mapping between the file sizes we're getting back here and the file sizes as we sent them. This example is specific to Dropbox, but there's no reason it can't be extended to other cloud storage providers. So, I've finished downloading it, it exists there, and we can see that the checksums match, so we can actually use this for storage.
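The demo's end-to-end check boils down to two comparisons: a one-to-one mapping of fragment sizes, and matching checksums after reassembly. A sketch, with `verify` as a hypothetical helper (the demo compares md5sums on the command line):

```python
import hashlib

def verify(sent_fragments, fetched_fragments):
    """Mirror the demo's two checks: server-reported fragment sizes map
    one-to-one onto what was sent, and the reassembled file hashes the
    same as the original."""
    sizes_match = ([len(f) for f in sent_fragments]
                   == [len(f) for f in fetched_fragments])
    digests_match = (hashlib.md5(b"".join(sent_fragments)).hexdigest()
                     == hashlib.md5(b"".join(fetched_fragments)).hexdigest())
    return sizes_match and digests_match
```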
The tool, in the form I used there, is available on the CDs you guys are getting as part of the conference packages, but I also have an updated version of the code on GitHub at this link; you can bug me for it afterwards. What I like about this toolkit, and one of the reasons I wrote it in Python, is the extensibility it gives us for hiding from these detection mechanisms. For example, we can maintain our own deltas that map to real changes in file size and file information, rather than faking it through the API. We can also do a sort of adaptive mangling, using different file names; right now the tool just uploads using the hash and uses that as the anchor point in the cloud storage system, but there's no reason we have to use that. Future work: I want to extend the CLI. Right now it just supports get and put, but that's fairly simple functionality to continue working on. I also want to get some more modules done: I looked at a few other cloud storage providers, just two or three, that have some mechanisms in place to defeat this but aren't particularly rigorous themselves, so really only Dropbox works at this stage; we can work on that, right guys? I also want to add more tunable options so we can look at different ways of automating the generation of the file fragments. In this case I used a generator producing 512K chunks with a normal distribution, but there's no reason we can't vary that across a whole bunch of different things; I had it overwrite one file, but there's no reason it can't spread across multiple files. There are a whole bunch of directions we can take this, depending on whatever tunables we want to use. So, this wouldn't be a security talk without the
implications of this kind of vulnerability. Looking at the blue-team concerns: it's fairly straightforward to detect this by observing the near-constant file sizes being written over time and the differences between the delta uploads, but we can deal with that with generators, by introducing subtle variations in the delay between uploads of the different versions of these files; we can also vary the names we use and the file sizes, so that's an initial response we can counteract. Secondly, it's fairly straightforward to ban an API key, but it's just as easy to request new ones; they're not going to limit the API or the available functions just because one or two keys get abused. Thirdly, the one thing that is fairly evident is the null caps, those zero-size fragments right at the end of the files that make them take up no space in the internal metrics; that's a fairly obvious signature. We can replace it with something very small, like a one-byte file. Granted, by moving to one byte we don't really have unlimited space anymore, but with a two-gigabyte quota we can still store two billion files like this. One of the reasons this is of major concern to these companies is that unlimited space really undermines their business model; they have this whole drug-dealer, first-hit's-free kind of thing, and unlimited storage breaks their financial incentive. Going the opposite way, though, if they break large binary writes, it will really damage a lot of the existing tools that already use Dropbox, or any cloud storage system; for example, I keep files in Dropbox that get binary modifications again and again, and that would probably trigger very similarly to the de-pack tool.
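The blue-team heuristic mentioned at the top of this passage, flagging a long run of near-constant-size overwrites to one file, might look something like this; the sample count and the 10% tolerance are assumptions for illustration, not anything a provider publishes:

```python
from statistics import mean, pstdev

def looks_like_depack(upload_sizes, tolerance=0.10):
    """Flag a file whose successive version uploads cluster tightly
    around one mean size -- the signature the talk suggests masking by
    jittering sizes, names, and upload delays."""
    if len(upload_sizes) < 8:
        return False  # too few samples to judge
    m = mean(upload_sizes)
    return m > 0 and pstdev(upload_sizes) / m < tolerance
```

Note that the ±5% sizing from the demo still falls under a 10% tolerance, which is exactly why the talk proposes widening the variation.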
Finally, with the various talks about PRISM and everything: deep file analysis is really time-consuming and frowned upon, but it's more time-consuming than it is problematic for these companies, so that's something we can use to get around this as well. So, I got through everything I wanted to say in about eleven minutes. I want to give some special thanks to the friends who helped me get to this stage and encouraged me to do this, and that's all I have to say. Enjoy your lunches. You're still speaking, no? No? But yes, you are not at lunch. Oh yeah, this is a fun conference, I forgot about that up here. What do we call this? Shoot the noob! Thank you. Why aren't we doing "shoot the noob"? First-time speaker, what else do we need? Right there. So, first time at DEF CON? First time at DEF CON, sir? All right, all right, come on up; she was sitting next to him. Is this your girlfriend? Wife? Oh, right, congratulations. All right, here we go. It is very hard to be chosen to speak at DEF CON, very competitive, so a big round of applause for our first-time speaker. Thanks a lot. Okay, now you can say you're done. Okay, I'm done. Thank you.