
Ruby for Home-Ec


Formal Metadata

Title: Ruby for Home-Ec
Number of Parts: 67
Author: Adam Forsythe
License: CC Attribution - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.
Production Place: Cincinnati

Content Metadata

Abstract
Come learn how to design your own algorithms and code to solve problems around the house. Trying to use your scrap wood efficiently? Want to sort your pantry to maximize variety? I’ll talk about the problem-solving process, walk through code, and touch on the computer science behind it all.
Transcript: English (auto-generated)
Let's get started.
How's everybody doing? It's very bright on stage. I can't see any of you. I am going to have a lot of time left for questions at the end, so I do encourage everyone to take notes and ask a question if you have one. I'm Adam Forsythe. I'm a software engineer and community lead at Braintree, and this talk is called Ruby
for Home Ec. After the talk, I'll be posting these slides, including my speaker notes, on GitHub. Let's talk about what Home Ec is for a second. This is not my picture for Home Ec. This is Wikipedia's picture for Home Ec. It's a little bit of a stereotype, the, I guess, 1950s kitchen scene.
Home Ec is defined as the profession and field of study that deals with the management of the home and community. I'm going to walk through two problems that show you how to do things around the house more effectively and hopefully remind you that programming can help you reach a better
solution to problems you run into in your everyday life. This is Jonathan. He's my coworker. In addition to being a software engineer, he's also a farmer and a do-it-yourselfer. He inspired this talk. Both of the examples I'm going to give today are based on real problems he had around the
house and around the farm last fall. The first problem I'm going to talk about came up when he was trying to make this countertop. This is the real countertop. This is not a staged picture. Butcher block countertops like this are made up of many small pieces of wood laid
alongside each other. Hopefully you can see that in the picture. This countertop in particular is L-shaped and joined along a diagonal. What that means is that every single piece of the countertop is a different length. Jonathan's problem was how do I cut pieces of all the right lengths out of this scrap
wood that I already have, as efficiently as possible and without waste. We're going to call the pieces that we start with boards and the pieces we want to end up with cuts. In this example, we have a seven-foot and a five-foot board, and we have
two four-foot cuts, a three-foot cut, and a one-foot cut that we want to end up with. In the middle there, you can see if we make the three-foot cut first, then we only have space to make one of the two four-foot cuts, and we end up with one cut left over that we can't make. If you make the four-foot cuts first and organize them right, you're able to make all
the cuts you need out of those two boards. Let's figure out an algorithm for how to do this, because in reality, he had a lot more cuts to make than just four. The goal is to find places for the biggest cuts first, since those are the hardest to
find places to make them. Let's start from the biggest one and go through the boards, trying to fit each cut into the boards we have. You see we do one of the four-foot cuts, another four-foot cut, then the three-foot cut, and that luckily leaves enough left for the one-foot cut. Let's look at the countertop again.
Like I said, instead of four cuts, there are actually 64 cuts to be made here. Rather than two boards, Jonathan has 35 boards, and they're of three different lengths. We'll get back to why the number of different lengths is important later. One thing that I haven't mentioned so far is that actually every single time you make a cut, you lose some length, because that saw blade has a width, and that's called
the kerf. We need to account for it every single time we make a cut. These details, the number of boards, the number of lengths, and the kerf are why we really don't want to solve this by hand. We really want some code to do it for us. This is the code.
It's a little more than 20 lines. Don't try and digest it all here. We're going to walk through it a couple lines at a time, but even then, don't worry about getting every little bit. You can always come back and watch the video again or play with the code on GitHub later. We're going to call this method greedy approximation, and that's because we want
to cut as much as possible first. We're greedy with our cuts. Remember, we made those four-foot cuts first before we made the shorter ones. This method is passed a list of cuts we want, a list of boards that we have, and how much we lose at each cut, which, again, is called the kerf.
We sort the cuts in reverse, because we want the longest ones first, and then we add some metadata here. For the cuts, we add whether or not we found a place to make that particular cut. For the boards, we add how much of the board we've used so far and what the specific
cuts we were able to make out of each board. That's really the output of this method, because at the end, we want a list of what cuts to make from what boards so that we can go to our saw and actually do it. We're going to go through the cuts. Again, that's from longest to shortest. We've already sorted them. For each cut, we try and cut it from each board. After that, we calculate the length needed to make the cut.
If it's your very first cut, you don't need to account for the kerf yet, because no cuts have already been made. For every cut after that one, you do. If the amount of length used from the board so far is greater than zero, we have to add that kerf to our length.
Now, if there's enough length left unused on the board, then we want to make the cut. We want to record that we found a place to make that cut. We want to add that length, including the kerf if necessary,
to the amount we've used from the board. We want to add the specific cut length to the length of cuts we made from this board. Again, we can go to our saw later and do it. If we did find a place to make the cut, we want to break out of the inner loop. We don't need to keep looking at more boards to find a place to make this cut. We found a place, let's go on to the next cut and skip the rest of the boards for this cut.
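The walkthrough above can be sketched in Ruby roughly like this. This is my own reconstruction of the idea, not the speaker's actual code from GitHub, so the names and data shapes are illustrative:

```ruby
# Greedy approximation: fit the longest cuts first.
# cuts:   array of desired cut lengths
# boards: array of available board lengths
# kerf:   length lost to the saw blade on each cut after the first on a board
def greedy_approximation(cuts, boards, kerf)
  cuts = cuts.sort.reverse.map { |len| { length: len, placed: false } }
  boards = boards.map { |len| { length: len, used: 0, cuts: [] } }

  cuts.each do |cut|
    boards.each do |board|
      # Account for the kerf on every cut after the first one on this board.
      needed = cut[:length] + (board[:used] > 0 ? kerf : 0)
      if board[:length] - board[:used] >= needed
        cut[:placed] = true
        board[:used] += needed
        board[:cuts] << cut[:length]
        break # cut placed; skip the remaining boards and move to the next cut
      end
    end
  end

  { boards: boards, unplaced: cuts.reject { |c| c[:placed] } }
end
```

On the small example from the talk (a 7-foot and a 5-foot board; cuts of 4, 4, 3, and 1 with zero kerf) this places every cut; with a nonzero kerf, some cuts start to fail, which is exactly why the kerf matters.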
You may have noticed that I actually skipped a little detail here. We sorted the cuts, but we didn't actually sort the boards. What that means is that the order the board started out in is the order that we went through them in. That order actually matters. This problem is a great example of that.
If you sort the boards from smallest to largest for our real input set here, you actually end up with 17 cuts you can't find a place to make. But if you sort them from the largest board to the smallest board, you can make every cut but one. That makes a huge difference. That particular ordering isn't inherent to this problem.
It's just a quirk of the specific lengths we happen to have in our input set. Remember here, we're solving a real-world problem. It doesn't have to be perfect. This is actually a good enough solution. It's okay for us to make one of the pieces of the countertop from two smaller cuts. Like I said, there's one cut we couldn't make as we wanted to.
Actually though, we can fit all the cuts out of these boards. It is possible. Now remember back to the beginning, there's only three different board lengths here. So it's actually really easy to try all the different orderings of board lengths that are possible.
So we already tried shortest, middle, longest, but 17 cuts couldn't be made. Then we tried longest, middle, shortest, but one cut still couldn't be made. It turns out though, if you do the shortest ones and then the longest ones and then the middle ones, you can actually find a combination that lets you make every single cut. So sometimes a little manual work, a little bit of taking advantage of the specific structure of your problem can help a lot.
So let's talk about theory for a second. There's a classic computer science problem that this is a version of. Some of you may know this already, but a lot of you may not have. I never took an algorithms class in college, so this is the kind of thing I certainly learned on my own.
And this problem is called the knapsack problem. You've got a bag and it can hold a certain weight and you want to figure out what items you can put in the bag that have the most value. So you can see in this diagram that our bag can fit 15 kilograms and we have five different items with different weights and value combination.
So you need to get as much value packed in there as possible without exceeding that 15 kilogram limit. And how you pick those items best is the knapsack problem. So if you wanted to apply our particular algorithm to that problem, we only use one number to evaluate each board here, right?
We just use the length. So what you would do is you'd encode that value and that weight as a ratio. And then you'd apply the greedy approximation algorithm to that ratio. And it would give you a result that in the end is at least like 50% efficient.
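A sketch of that ratio-based greedy heuristic might look like this (illustrative only; note that the commonly cited factor-of-two guarantee strictly applies to a variant that also compares the greedy result against taking just the single most valuable item):

```ruby
# Greedy knapsack heuristic: take items in descending order of
# value-per-weight ratio, skipping anything that no longer fits.
# items: array of { value:, weight: } hashes; capacity: weight limit.
def greedy_knapsack(items, capacity)
  chosen = []
  weight = 0
  items.sort_by { |i| -Rational(i[:value], i[:weight]) }.each do |item|
    if weight + item[:weight] <= capacity
      chosen << item
      weight += item[:weight]
    end
  end
  chosen
end
```

For many inputs this lands on or near the optimum; for adversarial inputs it can be well off, which is the usual trade-off with greedy approximations.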
It's not always the best way, but it is a fast and easy way to approximate a knapsack problem and often it will give you a good enough solution. So there's lots of different versions of the knapsack problem. Another well-known one is the subset sum problem. So like with our boards in the subset sum problem, the value and the weight are actually the same.
But instead of trying to just fit everything in, you have to exactly hit your target amount. So in this comic strip, you've got a group of people who want exactly $15.05 of appetizers. And the waiter has to figure out how to assemble food items to hit exactly that price, and that's the subset sum problem.
And actually, I believe there's a typo or mistake in this comic, and there are actually two different ways to hit that total. So if you take a picture or look this up online later, try and figure out what those two are. One of them is a silly case that you wouldn't actually want, though.
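For a small menu, subset sum can simply be brute-forced. This sketch handles the variant where each item can be chosen at most once (the comic actually allows ordering the same appetizer multiple times, which that version doesn't cover); prices would be in cents to avoid floating-point comparison issues:

```ruby
# Brute-force subset sum: return every subset of prices that hits the
# target exactly. Exponential in the number of items, so only suitable
# for small menus.
def subset_sums(prices, target)
  (0..prices.size).flat_map { |k| prices.combination(k).to_a }
                  .select { |combo| combo.sum == target }
end
```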
Yeah, so on the other hand, if you have multiple knapsacks, but again your value and weight are the same, it's called the stock cutting problem. And you can see how people normally think about that here. You've got a two-dimensional piece of paper or sheet metal or something like that, and you want to cut a number of smaller pieces out of it.
And it's often thought of in two dimensions, but it doesn't have to be. It can be one-dimensional, like in our board problem, which is an instance of the cutting stock problem, or it can be two-dimensional, but this is how it's often solved in industry, and it's often solved using proprietary software tools,
because it is so important to get as much efficiency as possible. There are also some interesting variations on this problem. One is called the guillotine stock cutting problem, where all the cuts have to go all the way across the piece of material. You can't do like you've done in this picture and cut only partway across.
And so that adds some additional constraints that makes it harder to solve, but makes the machines that you use to do it much simpler. So why is this all important to us as software developers? So this is the exact same picture I showed earlier, except that now I've labeled them servers and VMs instead of boards and cuts.
Imagine you have a bunch of physical servers and they have certain amounts of RAM, and you have a bunch of VMs that you want to run on them, and each VM requires a different amount of RAM to run its tasks. This also doesn't have to be VMs, it could be Docker containers or anything else. If you don't efficiently allocate those VMs to servers, you're going to end up with wasted server capacity and VMs that you just can't run.
Whereas if you allocate them efficiently, according to an algorithm like the one we used for cutting boards, you actually end up with an efficient allocation and you can run more VMs on the same amount of hardware. It's really just the stock cutting problem again, but this example is only one-dimensional.
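A one-dimensional sketch of that allocation idea, packing VMs onto servers by RAM with the same first-fit-decreasing strategy used for the boards (all names and numbers here are illustrative, not from the talk):

```ruby
# First-fit decreasing: place the largest VMs first, each on the first
# server with enough free RAM. Returns the placement plus any VMs that
# could not be scheduled anywhere.
def pack_vms(vm_rams, server_rams)
  servers = server_rams.map { |ram| { free: ram, vms: [] } }
  unplaced = []
  vm_rams.sort.reverse.each do |vm|
    server = servers.find { |s| s[:free] >= vm }
    if server
      server[:free] -= vm
      server[:vms] << vm
    else
      unplaced << vm
    end
  end
  { servers: servers, unplaced: unplaced }
end
```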
With servers, it's really many-dimensional. You may have CPU, RAM, hard disk space requirements, input-output speed requirements, lots of stuff like that. So if you're interested in learning more about these kind of problems, I know it's a little trite, but Wikipedia is a great place to start.
It's accurate, it's pretty easy to follow, lots of links to the many different subtypes of this problem. So that's a good place to start, I encourage you to go there. So that's our first example. The second problem that Jonathan had involved his pantry. And unfortunately, unlike the countertop picture, this is not actually a picture of his pantry.
But like reality, there's a lot of different kinds of jars in this picture, and there's different amounts of different kinds of jars. And this is really the state that Jonathan is in. Remember, he's a farmer. So he actually preserves all kinds of different things, and then his family eats them all winter and spring until they start getting fresh produce again.
So because he's eating these same things that he makes for months and months at a time, he really wants to sort them so that he gets maximum variety throughout the winter. So it seems like a simple thing, but actually it's a little complicated. So let's talk about why. If you have about the same number of each type of jar, it's actually really simple.
If you just have three red jars, three blue jars, and three green jars, you just sort of space them all out evenly, and you've got a pretty good amount of variety through the winter. But what if you have way more of one kind of jar than the other?
If you try and do the same thing naively, you're going to end up with several of your most common jar, in this case the red jar, right next to each other. So now your kids are going to be yelling at you, oh, I have to eat pickles again today, we just had pickles yesterday, and that's not a good thing when you have five children. So we really want to space these most common jars as evenly far apart as possible through the winter.
And by evenly spacing red, you can see that no color here is twice in a row, and blue isn't sorted perfectly, but it's still pretty evenly spaced out, and so you never have to eat it twice in a row.
So we need a solution for when multiple jars, in order to be spaced out evenly, would need to be in the same place. So in this picture, to evenly space out blue, one of the blue jars would have to be right in the middle under the arrow. To evenly space out red, you also need to put one of them right in the middle under the arrow. So when that kind of conflict comes up, we need a solution that's going to put the red jar,
the most common jar, in that spot, giving that place to the most common jar type. So my first thought was to just put all the most common jars on the shelf, equally spaced apart, and then insert the next most common jar type in the spaces between them,
and then take the least common ones and stick them in the remaining spaces. The problem with that is that you're going to end up with your least common ones bunched up in the spaces that you happen to have left over, and you don't really end up with an ideal assortment, even though the most common ones are pretty good.
So how do we avoid this? If instead of putting the jars where they need to go, and then filling in the gaps, what if we put the jars on the shelf and then space them out more and more as we go along? If we start with the most common jars, the problem is that we have to move them
exactly the same amount every single jar, or else the gaps between them are going to be uneven and you're still going to end up, like at the bottom here, with the most common jar not equally spaced. But if we reverse the process and start with the least common type of jar, and then the next most common, and then the most common,
the most common are always going to be equally spaced because we did them last, so we can put them exactly where we want them. Whereas the least common ones aren't going to be perfect. As you can see here, the green ones are sort of biased to the right. But it still gets them pretty close to evenly spread.
You still have a lot of variety here. So let's look at the code. This code is even shorter than in the previous problem. The only issue here was not writing the code, but figuring out what code to write. So let's walk through it. And again, don't worry if you don't get every little bit. There's a little bit of tricky math in here.
But you can always grab this from GitHub later and play with it, or go back and watch this video again. So first of all, here's our actual input of Jonathan's pantry. It's a simple dictionary mapping names to numbers of jars. So you've got your tomatillo caper soup, three jars, and spicy marinara, five jars.
And then if you look down right near the bottom, you see he's got 20 jars of salsa verde. And so you can see that we have this occasion where one type is way more common than some of the others. We only have two jars of creole sauce. So the first thing we do is we turn the dictionary into an array sorted by the number of jars.
So this is so that we have the least common jars first, and the most common jars last. And we're also going to create an empty array to represent our shelf. There's no jars on the shelf yet. We're going to put them on most common to least common. So it starts out empty. So the next thing we do is we go through each different type of jar, starting again from the least common.
And we want to figure out how many jars we're going to have between each jar we add to the shelf. So if we're putting ten jars on the shelf, those ten jars are going to have nine gaps between them. There's always going to be one fewer gap than there are jars. And then we also look at how many jars are currently on the shelf.
It's going to start out at zero, but gradually go up as we add more and more. And the spacing we're going to want is that number of jars divided by the number of gaps. So when we first start and jars on shelf is zero, we're always going to put the jars right next to each other,
because there's nothing already there to separate them. But as more and more jars are on the shelf, the remaining jars are going to get put on farther and farther apart as that jars on shelf number that's on the top of your division gets bigger. So next we put the jars on the shelf, so number times, if we have ten jars we're going to do this ten times,
and we figure out the position on the shelf we want them, and we take our spacing and multiply it by the number of the jars. So the first jar we put one spacing amount over, the next jar we put two spacings amount over, etc. And we round that to be an integer, since remember we got our spacing from doing some division,
so it's going to be a fraction a lot of the time. And then we add the number of jars we've gone through so far to that total. Now the reason we're doing that is because remember we're talking about an array here. So when we stick an item at the beginning of the array, all of the other items to its right are going to have their index moved up by one.
So as we insert jars from the beginning towards the end, we have to account for that. So that's what that addition is doing right there. And it's the same thing if you were counting jars on a shelf. If you insert one at the left, you're not going to now insert one after the second jar, you're going to insert after the third because you've just incremented the index of everything.
Yeah. So you might have noticed I left something out again here, sort of like I did in the previous problem. In Ruby, when you divide two integers, both the jars on the shelf and the gap are going to be integers, you're going to get an integer result. So when you divide five by two, you get two, not 2.5.
So later, I multiply this number. And 2 times 10, 20 is very different than 2.5 times 10, which is 25. So I'm going to get rounding errors here if I do that normal division. So how am I actually getting accurate spacing? And the answer is that Ruby includes a library called mathn that changes how a bunch of the math functions work.
And if we require mathn here, all of our division is magically going to keep enough information that when you do the multiplication, you get the right result. So you don't lose any accuracy due to multiplying a rounding error,
and you get accurate spacings like we want. So again, this library does a lot of other things, it's kind of interesting, but this is the only part we're using right here. So this solution works really well for a small number of jars. But when you insert an item into an array, it actually takes time proportional to the length of the array.
So if instead of 100 jars, we had 100,000 or 10,000,000, this would be really slow. You basically get time proportional to the square of the length of the array, the square of the number of items that you have in your pantry. Now you could use a linked list to make inserts fast, but because this is only 100 or so items, it's not important here.
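Putting the pieces together, a sketch of the shelf-filling algorithm might look like this. Note that `mathn` has since been deprecated and removed from Ruby's standard library, so this version uses `Rational` explicitly to get the same exact division; the structure otherwise follows the walkthrough above:

```ruby
# Space jars evenly, least-common kind first, so that the most common
# kind (placed last) ends up perfectly spaced. Rational division keeps
# the spacing exact instead of truncating like integer division would.
# pantry: a hash mapping jar names to counts.
def space_jars(pantry)
  shelf = []
  pantry.sort_by { |_, count| count }.each do |name, count|
    gaps = count - 1
    # With n jars there are n - 1 gaps; guard against a single-jar kind.
    spacing = gaps.zero? ? 0 : Rational(shelf.size, gaps)
    count.times do |i|
      # Offset by i because each earlier insert shifted later indices up.
      position = (spacing * i).round + i
      shelf.insert(position, name)
    end
  end
  shelf
end
```

On a small pantry like `{ "creole sauce" => 1, "marinara" => 2, "salsa verde" => 3 }`, no two adjacent jars come out the same.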
So let's step back for a bit and think about this type of problem in general. We've got some criteria that we're trying to optimize, in this case jar spacing, even jar spacing, so that we don't get tired of eating any one thing. And we care more about the more common jars, which leads us to the approximation we used. But our approximation is really very specific.
We can't use it to solve other kinds of optimization problems. It only works for sorting jars or very, very similar problems. But there are approximation techniques that generalize well to many types of optimization problem. And I'm going to talk about just one of them here. So my example is going to be simulated annealing.
So annealing, which you can see actually happening in this picture, is a material such as metal or glass is heated up and then cooled back down. Physically that's all that's happening, but it's heated up and cooled back down to very specific temperature points and very specific rates of heating and cooling such that when it cools back down, its structure becomes more uniform
and it ends up at a net lower energy level so that it's much easier to work with. It's not going to be as brittle, things like that. So keeping in mind that definition of annealing, simulated annealing is exactly what it sounds like.
You literally simulate the uniformity or energy level of a structure. And simulate the process that metal goes through as it cools so that you end up at a structure that is low energy, that is easy to work with, malleable or optimized for what you want to do with it. So let's look at this Wikipedia animation for a second.
On the y-axis is uniformity. So that top point where the red line is right now is the most uniform possible structure of the underlying material, the lowest energy state. And all the possible structures are strung out across the x-axis. So as you can see, when the temperature is high, the red line moves around a lot.
The metal is jumping around or glass or whatever to many different structures. But as the temperature drops, it can't change structure so rapidly because it doesn't have as much energy. So it tends to stick in lower energy states, more uniform structures. And we gradually approach more optimal energy levels.
So you start from 20 degrees and as you get down to zero, you gradually reach what on this graph is the global optimum. And what's great about this method is that with a lot of approximation methods, you're just going to find a local maximum or minimum. And because you jump around so much in this one, you tend to find a better optimal point than you would using a lot of other approximation methods
that don't jump around between possible solutions. So if we think back to our problem again, we want our jars to be spaced out as uniformly as possible. And that's exactly what annealing does. It makes the structure of a material more uniform by finding lower energy states.
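As a rough, generic illustration (the energy function, cooling schedule, and parameters here are my own choices, not from the talk), a simulated annealing loop in Ruby might look like this:

```ruby
# Generic simulated annealing sketch: minimize an energy function by
# proposing random swaps, always accepting improvements, and sometimes
# accepting worse states (more often while the temperature is high).
def anneal(state, temp: 10.0, cooling: 0.99, steps: 5000, &energy)
  current = state.dup
  steps.times do
    candidate = current.dup
    i, j = rand(candidate.size), rand(candidate.size)
    candidate[i], candidate[j] = candidate[j], candidate[i]
    delta = energy.call(candidate) - energy.call(current)
    # Metropolis criterion: accept a worse state with probability e^(-delta/T).
    current = candidate if delta <= 0 || rand < Math.exp(-delta / temp)
    temp *= cooling
  end
  current
end

# One possible energy for the pantry: count adjacent jars of the same kind,
# so fully spread-out shelves have the lowest energy.
adjacent_pairs = ->(shelf) { shelf.each_cons(2).count { |a, b| a == b } }
```

Starting from a fully grouped shelf and annealing against `adjacent_pairs` tends to spread the jars apart, which is the same goal our hand-rolled spacing algorithm had.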
So if you treat the jars as having more energy the closer they are together, then if you run some simulated annealing on it, those jars are going to tend to move farther apart to dissipate that energy. And those are going to be more stable states and so more likely to be found at the end of the algorithm. One place that you might use this in computing is if you were designing a circuit board.
You have a couple criteria you want to optimize. You want to minimize the shape of the board. You may even have hard constraints on the shape and size of the board. And you want also the wires on the board to be as short as possible. So you could define a criteria that combines those two things
and then use simulated annealing to optimize it. So this is something that is really used in hardware design. But recently I saw a really cool software example of this on Hacker News or something. I encourage you all to check it out online. It's a cool project using all CC and public domain images.
It's a JavaScript app that unshreds an image. So it shuffles up all the columns of an image randomly and then it uses simulated annealing to unshuffle them. And yeah, please do check it out after this talk. A lot of it is actually really beautiful and it's fun to play with the number of iterations
and the starting temperature to see sort of how those affect the annealing process. So rather than go into too much detail about it, I'm going to show you an animation. You can see it shuffled the columns and now they're jumping around a lot, but slim strips of the image are slowly forming because the columns that are similar are sticking together. And as time goes on and more and more iterations are run of the annealing process
and the temperature gets lower and lower, we're moving around less, but we're also getting wider and wider strips of the original image. So this is a really cool example of how someone doing image processing or other computing tasks to collate data might want to use something like simulated annealing.
There are so many columns of data here that there's no way you could brute force every possible ordering. But using an approximation technique like this can get you a really good result anyway. I mean, any one of us could take the output of this, take those wide strips that are left at the end, there's only, I don't know, 20 of them, and put them into the correct order using just our brains.
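This isn't the unshredding app's actual code, but the core annealing loop it describes can be sketched in Ruby, with numbers standing in for image columns and the difference between neighbors standing in for edge similarity. The temperature, cooling rate, and iteration count here are arbitrary starting values, the same knobs the app lets you play with:

```ruby
# Toy simulated annealing: order numbers so adjacent values are similar,
# the way the unshredding app orders image columns by edge similarity.

# "Energy" is the total difference between neighbors; lower is better.
def energy(cols)
  cols.each_cons(2).sum { |a, b| (a - b).abs }
end

def anneal(state, temp: 10.0, cooling: 0.999, iterations: 20_000)
  current_e = energy(state)
  best, best_e = state.dup, current_e
  iterations.times do
    i, j = rand(state.size), rand(state.size)
    state[i], state[j] = state[j], state[i]          # propose a random swap
    new_e = energy(state)
    # Always accept improvements; accept worse states with a probability
    # that shrinks as the temperature drops.
    if new_e < current_e || rand < Math.exp((current_e - new_e) / temp)
      current_e = new_e
      best, best_e = state.dup, new_e if new_e < best_e
    else
      state[i], state[j] = state[j], state[i]        # reject: undo the swap
    end
    temp *= cooling                                  # cool down each iteration
  end
  best
end

shuffled = (1..15).to_a.shuffle
puts "energy before: #{energy(shuffled)}"
result = anneal(shuffled.dup)
puts "energy after:  #{energy(result)}"
```

Raising the starting temperature or slowing the cooling lets the search jump around longer before settling, which is exactly the behavior visible in the animation.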
So this really gets you the part that you can't do manually done. I think it's pretty cool. So I know, again, I said this already, but a great place to start learning about simulated annealing is Wikipedia. The specifics of how to apply it to different types of problems are addressed shallowly but pretty well here.
It really is a good place to start. So hopefully these examples reminded you that programming can be used to solve everyday problems like designing something in your kitchen or optimizing your food intake. And hopefully you've learned a little bit about thinking through a problem and coming up with an algorithm to solve it.
That's something that some of us do regularly, but a lot of us don't. And hopefully you've also seen how these everyday problems and their solutions relate back to important theoretical problems and approximation methods from computer science, so that all of us can go out, learn more about these things, and use them more in our daily work.
So I'm going to switch to code for a minute and actually run these programs and show you their output so that you can see that it's all real. Oops, one second.
My terminal window does not want to move. Cool. All right, so here's our first problem.
We can look on the left and see that we have our regular method here that we talked about earlier, the greedy approximation, and then we have a main method that just calls it with our specific data set. So we have our 64 cuts that we want to make, generated by two pretty simple algorithms, 32 apiece.
We have all of our boards sorted shortest first, then longest, then middle, so that we can make every cut. We have our kerf value set. So if we then run this application, we can see that we get a listing of all the boards and which cuts to make from each, and there are no remaining cuts.
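As a minimal sketch of a greedy first-fit cutter like the one in the demo (not the actual code from the talk; the board lengths, cut lengths, and kerf value below are made up for illustration):

```ruby
# Greedy first-fit: for each board, in whatever order the boards are given,
# take the remaining cuts that still fit, longest-first, until the board
# is used up. KERF is the width the saw blade removes with each cut.
KERF = 0.125  # assumed value, in inches

def assign_cuts(boards, cuts)
  remaining = cuts.sort.reverse          # try longest cuts first
  plan = boards.map do |board|
    left = board
    taken = []
    remaining.reject! do |cut|
      if cut + KERF <= left
        left -= cut + KERF
        taken << cut
        true                             # remove this cut from the pool
      else
        false
      end
    end
    [board, taken]
  end
  [plan, remaining]                      # remaining = cuts we couldn't make
end

plan, leftover = assign_cuts([96, 96, 48], [30, 30, 30, 20, 20, 20, 10])
plan.each { |board, taken| puts "#{board}in board: #{taken.inspect}" }
puts "unassigned: #{leftover.inspect}"
```

Because the algorithm is greedy, the order of `boards` changes the outcome, which is exactly what the demo shows when the boards are re-sorted.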
But if we go back and we change the order of these so that we have the longest and then middle and then shortest, you can see that there is one cut that we were unable to make,
one remaining. That is one of the shorter ones that we needed. So it shows you that order matters, and the solution is very fast for a small input set like this. Cool. So let's look at our other problem here. You can see we have our actual input set.
The line that's unfortunately partway off the top of the screen is including mathn so that we get the kind of division we want. We sort our pantry, we run our algorithm, and we get out a list of all the jars in the order we should put them on our shelf. And you can see that salsa verde is at the very bottom,
and also at the very top, off the top of the screen, so that it's spaced out as much as possible. So that's all I have. Slides and speaker notes will be on my GitHub at github.com/agfor, and I have some time for questions.
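The jar-spacing idea from the demo can be approximated with a simpler dealing trick than the mathn-based fractional indexing the talk's code used; this sketch is an assumption-heavy stand-in that only guarantees no two identical jars sit next to each other when no single kind fills more than half the shelf:

```ruby
# Spread identical jars apart: take the most numerous kinds first and deal
# them into slots 0, 2, 4, ... and then 1, 3, 5, ... so copies of the same
# kind land as far apart as this simple scheme allows.
def space_out(jars)
  ordered = jars.group_by(&:itself).values.sort_by { |g| -g.size }.flatten
  shelf = Array.new(jars.size)
  slots = (0...jars.size).step(2).to_a + (1...jars.size).step(2).to_a
  slots.zip(ordered) { |i, jar| shelf[i] = jar }
  shelf
end

pantry = ["salsa verde"] * 3 + ["jam"] * 2 + ["pickles"] * 2
p space_out(pantry)
```

The fractional-index approach described in the Q&A below does the same kind of spreading, but computes each jar's position as a rational fraction of the shelf length and then rounds to an integer slot.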
Thanks a lot. Anybody? Yeah, how do I come up with a mathematical algorithm when I run into a problem like this at home? I tried to sort of show you what my actual thought processes were here.
With the jars, I actually got graph paper out, drew circles, tried to figure out what worked, and then turned that into an algorithm. The steps of figuring out how to tweak the math so that the rounding works in your favor instead of against you, when you do end up having to convert a floating point number to an integer index, were a little tricky. But other than that, I really did go from dots on a page to exactly implementing that in code. For the other problem, I had an advantage, which is that I recognized it as a form of the knapsack problem. So then I could Google and see what the simplest approximation algorithm for the knapsack problem was,
which I had done before but couldn't remember. So I think that part of this is doing it a bunch of times and starting to recognize patterns, which is where someone who had taken an algorithms class would have a big advantage, if the class was good enough to let you relate it back to the physical world. It is a creative process.
I'm not necessarily great at explaining it, but for me, pen and paper is always the best way to start. Start from a really small set, but not too small: if you've got to sort ten things, don't try it with just two, because two is often a degenerate case that's not going to give you a great example. Start with three things, run through the same process, then four, and look for the pattern that gets you the best result in both three and four,
and then extrapolate that to larger numbers. That's about the best I can say for something that is discrete like these problems. Any other questions? Okay, great. I'm glad to answer some afterwards. Thank you very much.