Beyond PHP - It's not (just) about the code

Speech transcript
Well, I'm happy to see so many people here at 12 o'clock, which is usually lunchtime. You're all here to listen to me talking about stuff that's beyond PHP: stuff that's a little bit outside the code, and stuff that's connected to all those pretty pictures up there. Let me start by telling you who I am. I'm Wim Godden; you can follow me on Twitter, that's my Twitter handle up there. I'm from Belgium, from a town not far away from here, about 40 minutes: we have a beautiful abbey, a couple of castles and stuff like that. I started a company about 15 years ago called Cu.be Solutions; we're based very close to Brussels, and we do mostly PHP consultancy and other work with open source. I've been doing open source since around 1997, I've worked on tools like OpenX and PHPCompatibility, and PHPConsistent is a project I'm working on right now. I've also been giving talks like these for the last couple of years. I should tell you something about the company, because I'll be coming back to it: we do open source work, mostly PHP, and we give training courses, but we do a lot of things outside of PHP as well. We have our own high-speed redundant network with multiple uplinks, we use a couple of advanced protocols, and we do a lot of high-scalability work, and that causes us to run into certain issues that we'll get to later on. Our customers are mostly ISPs and telco operators, but we also build a lot of public-facing websites. That's enough about me and the company; let's see who you are. Who here is a developer? Who here is a PHP developer? Interesting, I like that: it means the word is spreading. Who here uses MySQL? OK. Who has ever set up a MySQL master-slave? Quite a few people, and I expect more hands to go up on the next question: has anyone ever set up a site or application with a separate web server and a separate database server? Keep your hands up. Now, how many of you know how much traffic runs between the two servers? Not many any more. So you can see where I'm going: we're going to talk about stuff that's a little different from what you usually see. The topic is the stuff we take for granted, or those famous last words, "it should work just fine", also known as "works fine on my laptop". What works fine today on your laptop might not work that well in production tomorrow. That's basically what we're going to talk about.
But I will not be discussing the most common mistakes; we all make those, we all know them, and we've all learned from them. I want to talk about PHP and how it interacts with the larger ecosystem that PHP sits at the core of. Of course it all starts with our code, but what's the thing PHP interacts with most? Databases. So let's talk about databases, quite a bit actually, especially in this first part of the talk. I don't want to spend too long on the fact that you should not write queries like this one. Can anyone tell me which tool created this query? Drupal, indeed: my favorite. Knowledge is power. You can see things in here like "+ INTERVAL 0 SECOND": why would you add 0 seconds to something? And then the same WHERE clause appears something like 15 times. This is what you get when you allow some kind of tool to run queries that you don't even know about: you don't actually know what that project is doing to your database. So I won't say much more about it, just: don't do this, please. Now, indexes are a different matter. Who here uses indexes on their database? Everybody who's not raising their hand, please go have a look at indexes and why they exist, because I saw several hands that didn't go up. I just want to illustrate quickly, with an example, why knowing what an index does is important. Say we have this query: SELECT id FROM stock WHERE status = 2 ORDER BY quantity. My question is: where do I put the index on the stock table? On status, on quantity, or one combined index on both? It depends on the database, but in this case, on MySQL, I would put a combined index on status and quantity: one index on both fields. Now what if I do this, with a greater-than sign instead of the equals sign: is it still the same index? You're probably thinking it should be the same, but this might be a trick question. So, is it still status and quantity? Any ideas? OK, the answer in this case is actually yes and no, because it depends on the type of index that you use. If you're using a B-tree index, then the answer is yes, it's still the same index. If you're using a hash index, then as soon as you hit that greater-than sign it stops using the combined index, so in that case you need two separate indexes, one on status and one on quantity. And that in turn only works well on MySQL 5.6 or higher, because only newer versions can use two separate indexes in one query like that. This just illustrates that it's very important to know not only how indexes work, but also how they will work in your specific case, with the specific index type you're using, on the specific MySQL, PostgreSQL or Oracle version that you're using. Now, we know that indexes make databases faster, so some people have this idea: let's just index everything. There's a table with 10 columns, so they put 10 indexes on it, and then they start creating combined indexes across the fields. That might not be the best idea, because every single time you insert, update or delete, basically every time you modify the database, those indexes have to be updated. If you have 50 indexes on a table, it has to update all 50 of them.
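To make that composite-index scenario concrete, here it is written out in SQL; the stock table and the status value follow the talk's example, and the index name is made up:

```sql
-- Composite B-tree index: serves both the equality filter on status
-- and the ORDER BY on quantity in a single pass.
CREATE INDEX idx_status_quantity ON stock (status, quantity);

-- Fully covered by the composite index:
SELECT id FROM stock WHERE status = 2 ORDER BY quantity;

-- With a range predicate, a B-tree index still works, but a hash index
-- stops being usable at the range condition, so separate indexes on
-- status and quantity may be needed instead:
SELECT id FROM stock WHERE status > 2 ORDER BY quantity;
```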
At the same time, every time you run a select query, the database has to evaluate which index would be the best one to use, for every single select. If you're doing 10 thousand selects a second, it's going to evaluate all those indexes for each and every one of them; that's not a very good idea. There's a quote by Bill Karwin, who has worked for Percona, the company behind a well-known fork of MySQL, and he says: "relational schema design is based on data, but index design is based on queries." Basically that means: when you start designing an application, you're looking at the data that you have, and that's when you create your table structure. At that very moment, the only indexes you create should be your primary and foreign keys. Don't create any other indexes yet, because you don't know what you're going to query on. Only when you're actually writing your queries, that's when you add those indexes. Now suppose you have just the right indexes on the right fields, but you have a database query that takes 15 seconds: how do you detect it? There are a couple of simple things you can do. I'm talking mostly about MySQL here, but most of these features exist in PostgreSQL, Oracle, Microsoft SQL Server and many others. One thing you can do is turn on the slow query log: you either enable it in the configuration file, or you enable it at runtime by saying SET GLOBAL slow_query_log. What this does is write every query that takes longer than a set amount of time to a log file, and you can define how much time that is; the default is something like 5 seconds, I think, which is way too long of course. There's also a MySQL-specific option, log_queries_not_using_indexes, and I don't have to explain what that does, I guess: it logs any query not using an index. Those are really interesting to look at, because you might want to add an index somewhere for such a query. But be aware that the option only works if the slow query log is on. That's not very well documented, but if you try to use the second one without enabling the first, you won't see anything at all. The third thing you can do is turn on the general query log, which logs every single query running on your system. If you do that on a production machine, don't forget to turn it off, because if you're doing 10 thousand queries per second, you get 10 thousand lines per second in your log file, and that can actually bring down your entire machine. Now you might wonder: why would I want to log every single query running on my system? What's the point? It's way too much to watch. Well, there's a nice tool called pt-query-digest; it's part of Percona Toolkit, and if you run it and feed it that log file, it gives you some pretty interesting output, which looks a little like this. It might be hard to read at the back, but basically it's lines with queries: it aggregates the results and says, for example, this query is being run 16 thousand times.
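The three logging switches just mentioned look like this in MySQL; these are runtime settings (the same options can go in my.cnf), and the 1-second threshold is just an illustrative choice:

```sql
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second

-- Only takes effect while the slow query log is on:
SET GLOBAL log_queries_not_using_indexes = 'ON';

-- Logs every query; never leave this on in production:
SET GLOBAL general_log = 'ON';
```

Once a log has collected some data, running pt-query-digest over the log file aggregates it as the talk describes.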
Another query is run only 12 hundred times, but takes 13.6 seconds per query, so that's obviously a query you want to look at. It will list all these queries; that's only the first part of the output. If you dig deeper into that poor query, you get even more detail, which will tell you things like: this query spent this much time waiting for a lock, it examined that many rows, it retrieved this much data from the database, and so on. A lot of detail that helps you really debug why the query is so problematic. That's for when you want to get down to the nitty-gritty details of a specific query, but there's actually an easier way built into most database systems. There's a statement called EXPLAIN in MySQL, and it exists in other databases in a similar form; what it does is tell you how the database will execute a specific query. You just say EXPLAIN, you give it the query, and it outputs something like this. This first one is a very simple query on a single table, and we can already see that something's not right: the possible keys on that table are NULL, so basically there are no indexes. As a result, we are not using any key, and we're retrieving 300 thousand rows with that query; that's not a good idea. Now, if we have, for example, a join between two tables, there are possible keys, and we are actually using a primary key, resulting in the retrieval of just one row per table: that's much better. EXPLAIN lets you see all of that. It also has a column called type and a column called Extra. If the type says system or const, that's all good; if it says ALL, it basically means you're reading every single row in that table, and if it has 50 million rows, that's not so good. In the Extra info, "Using index" is obviously very good, but "Using filesort", unless you have a very small table, is usually a bad sign.
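A minimal EXPLAIN session matching that description might look like this; the table and column names are invented for illustration:

```sql
EXPLAIN SELECT c.name, o.total
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE c.id = 42;

-- Things to check in the output:
--   possible_keys / key : NULL here means no usable index exists
--   rows                : how many rows MySQL expects to examine
--   type                : system or const is good; ALL is a full table scan
--   Extra               : "Using index" is good; "Using filesort" is usually bad
```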
Now, for and foreach loops around database queries: what's the problem with those? Can anyone tell me why you would not run database queries inside a for or foreach loop? Have a look at this code; I hope it's readable at the back, maybe only the first few rows, but quickly: we're using some kind of ORM here, and we're retrieving customers that are in the state of Minnesota. Then we loop over those and, in an inner loop, fetch all the contacts that are attached to each customer, and we do something with each contact that we retrieve. This works fine if you have 5 customers. If you have 10 thousand, it could be a bit of an issue, because you're doing one query to retrieve your customers and then 10 thousand more to retrieve the contacts for each of those customers: 10 thousand and one queries, whereas an easy solution would be to just write a join and retrieve exactly the same information in one single database query. That's going to be a lot faster, of course. The problem is that a lot of people are still writing those for and foreach loops around database queries, causing massive overload on your database server, as well as some extra processing on the PHP side. Usually that happens because, as I explained, it works fine for 5 users; then your company buys another company that has a million users, and suddenly it runs into serious problems. The other case I see happening is an internal application used by 5 people who all hit the database, but they're just internal users, not hitting it that hard. A manager comes along, sees the application, and says: hey, our customers could actually make use of that, let's put it online. Now instead of 5 users you have a thousand users, and those for and foreach loops cause issues. That's why I wanted to give this talk. Basically, this talk is the stuff we learned from projects where we built software for customers, but we also sometimes get pulled into projects when someone calls and says, we have an issue, or, we need to do a migration, and we figure out that something is wrong. So this is sort of 15 years of "how to kill your server with two lines of code", and I want to illustrate that by talking about three customers, or rather three customer cases.
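Before the customer stories, here is that N+1-versus-JOIN point as a runnable sketch. The table names, the Minnesota filter and the data are invented, and SQLite in memory stands in for MySQL so the example is self-contained; with MySQL only the PDO DSN and credentials would change.

```php
<?php
// Illustrative schema and data standing in for the customers/contacts example.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, state TEXT)');
$db->exec('CREATE TABLE contacts (id INTEGER PRIMARY KEY, customer_id INTEGER, email TEXT)');
$db->exec("INSERT INTO customers VALUES (1, 'Acme', 'MN'), (2, 'Globex', 'MN')");
$db->exec("INSERT INTO contacts VALUES (1, 1, 'a@acme.test'), (2, 1, 'b@acme.test'), (3, 2, 'c@globex.test')");

// Anti-pattern: one extra query per customer (N+1 queries in total).
function contactsPerCustomerLoop(PDO $db): array {
    $result = [];
    $customers = $db->query("SELECT id FROM customers WHERE state = 'MN'");
    foreach ($customers as $customer) {
        $stmt = $db->prepare('SELECT email FROM contacts WHERE customer_id = ? ORDER BY id');
        $stmt->execute([$customer['id']]);
        $result[$customer['id']] = $stmt->fetchAll(PDO::FETCH_COLUMN);
    }
    return $result;
}

// Better: one JOIN retrieves exactly the same information in a single query.
function contactsPerCustomerJoin(PDO $db): array {
    $stmt = $db->query(
        "SELECT c.id, ct.email
         FROM customers c
         JOIN contacts ct ON ct.customer_id = c.id
         WHERE c.state = 'MN'
         ORDER BY c.id, ct.id"
    );
    $result = [];
    foreach ($stmt as $row) {
        $result[$row['id']][] = $row['email'];
    }
    return $result;
}
```

With 10 thousand customers, the first function issues 10,001 queries; the second still issues exactly one.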
I'm of course not going to give their names; they're all Belgian companies, so I'm going to call them X, Y and Z. So let's talk about X. X was a job site: you could go to the site, type "I'm looking for a PHP job", so you type PHP, you click search, you get 50 jobs, and then you could click on a job and get the details about it. Basically, they were monitoring how many times a certain job was viewed: they kept counts daily, weekly and monthly, and they also kept a log of which user saw which job. They put all of that in four tables: shown_today, shown_week and shown_month, which contain the job ID as a primary key and the number of times the job was shown, and a shown_user table, which contains the job ID, the user ID and when the job was shown to that user. Once a day they would simply reset shown_today, once a week shown_week, and so on. Works fine, no problem whatsoever. Now, originally they logged when someone actually clicked a job and saw the full description, but someone in the marketing department decided: we want to change that. As soon as you type PHP and you get 50 jobs, we want all those 50 jobs counted as shown. OK, fine, so a developer then changed the code to do that. But that means that if you have 50 jobs to be updated, you're running 50 of those updates for shown_today, 50 for shown_week, 50 for shown_month, and 50 inserts into shown_user: that's 200 queries for one search, which might not be ideal. And this is the actual code they used; sorry, it's too small to read at the back, but you can already tell there's a foreach in there. This was the original code they had, and they just put a foreach around it, and it worked, you know. The thing is, they were a customer not because we wrote code for them, but because we had set up a MySQL master-slave for them. We set it up and it was running quite nicely. Then they got a traffic peak for something, I don't know what, and all of a sudden you see that stuff is happening here. This thing was running at up to 16 hundred inserts per second and 2 thousand updates per second, but it was a 16-core machine, so it was handling that perfectly, no problem whatsoever. Until, two days in, the client calls us: we got this mail from the monitoring system you set up, the MySQL slave is now running more than 5 minutes behind the master, what is going on? And since we set it up, who do you think got the blame? We said: OK, there's a problem somewhere, but what's causing those peaks? We can't change anything in the infrastructure, and everything worked perfectly before, so what is going on? So we asked their developers: did you change anything? No, they said, we didn't change a thing, and all of a sudden the slave is lagging. The only thing we could do was turn on that general query log I talked about, run it through pt-query-digest, and see: all of a sudden, four queries at the top. They were those updates and inserts into shown_user, shown_month and so on. So we went back to the developers, and they said: well, maybe we did make this tiny little change, we added a foreach somewhere. By then, after 3 days, the slave was running two and a half days behind, and falling an extra hour behind every hour or so. The manager said: you know, it's fine, it'll catch up during the night. It won't. Then they said: maybe you can tweak the database server a little bit so that during the night it catches up. Maybe that's possible, but what's the point of having a master-slave setup if your slave is running two hours behind during the day and you expect users to switch to it? So the big question is: why was the slave lagging behind, when the master wasn't having any issues? To understand that, we need to look at how master-slave replication works.
Our master here uses its 16 cores to process all the database queries, and as soon as a query is done, besides writing the data to disk, it also writes the query to a binary log file; it can use one CPU core to do that, and that binary log file contains nothing more than queries. The slave then copies that file, using one CPU core. But then it has to execute each query in the same order as the master, and the only way to do that is sequentially: one query, then the next one, then the next one. So it can only use one CPU core for that. That means the master was using 15 or 16 cores while the slave was stuck with one, and one CPU core just wasn't enough to process all those queries. So while the master's graph looked like this, going up, the slave's looked like that: you can see a different color here, but it's not rising any more, because it just wasn't able to cope on one CPU. So what's the fix? How do we fix this ugly, well, this beautiful piece of code with its foreach? Any ideas? Getting rid of the foreach, yes, but the marketing people don't like that, they want their statistics. Getting rid of the marketing people is a solution too, if you want. Yes: having one insert with multiple entries. So you do basically this: INSERT INTO shown_today VALUES, then you attach all the values, and you can still have the ON DUPLICATE KEY UPDATE at the back. That works. So instead of 200 queries we have only 4, which is kind of an improvement. The code to do that is kind of dirty: you create a piece of string, you foreach across all the values, you cut the last comma off, and you append all the values. That's what they produced, and it took them 2 days. We couldn't wait 2 days, of course, so in the meantime we just did this: set auto-commit to false, run exactly the same code, and then commit. Now, instead of running 200 queries each writing to disk individually, we ran 200 queries in one commit, and we only updated the indexes once instead of 200 times. And that meant it was OK: the slave was slowly catching up again. So: for loops, bad; I don't have to mention that any more. The issue is that if you add master-slave, it gets worse, a lot worse. You can have an application that works fine today with a for loop in there somewhere, and tomorrow your system administrator says, I want more redundancy, and adds a slave to the system, and all of a sudden the whole thing goes down. Using transactions actually caused a performance increase here, but be aware that transactions can also introduce locking issues; that didn't happen to be the case for us. In our case, the slave caught up fine and we didn't have any issues afterwards.
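As a sketch of that batched-insert fix: the table layout follows the talk's description (shown_today with a job_id primary key and a counter), and everything else, including the function name, is illustrative. Placeholders keep it injection-safe, which the string-concatenation version they wrote was not guaranteed to be.

```php
<?php
// Build one multi-row INSERT ... ON DUPLICATE KEY UPDATE (MySQL syntax)
// instead of one query per job. Returns the SQL plus its parameters,
// ready for $pdo->prepare($sql)->execute($params) inside a transaction.
function buildShownTodayInsert(array $jobIds): array {
    $placeholders = implode(', ', array_fill(0, count($jobIds), '(?, 1)'));
    $sql = 'INSERT INTO shown_today (job_id, shown) VALUES ' . $placeholders
         . ' ON DUPLICATE KEY UPDATE shown = shown + 1';
    return [$sql, $jobIds];
}

// 50 search results become one query instead of 50:
[$sql, $params] = buildShownTodayInsert([17, 23, 42]);
// $pdo->beginTransaction();
// $pdo->prepare($sql)->execute($params);  // repeat for the other three tables
// $pdo->commit();                          // one commit, one index update
```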
So, I've talked a lot about databases, but there's another thing PHP interacts with a lot: the network, of course, because without the network we would have no PHP, basically. So let's talk about customer Y. They have one of the top 10 sites in Belgium, and they were growing, I would say exponentially, at the time. For some reason, between 8 and 10 PM in the evening, they would have serious issues on the site: they would lose database connectivity, certain queries would simply fail, connections were dropped, there were enormous amounts of latency; very inexplicable things would happen. We didn't actually write any code for them, but they called us because they were really stuck. The first thing we asked was: do you have any monitoring on your infrastructure? And the answer was: well, we can check whether the server is running or not. So what we did was install a tool called IPTraf; anyone know IPTraf? A few people, OK. Basically, it's a tool that will just tell you how much traffic is going across your network ports. And we noticed that they were sending 98 megabits per second from the database server to the web server. That number seemed a little odd to us, so we looked at the switch: turns out its maximum speed was 100 megabits per second. 98 plus a little bit of overhead equals 100. They were simply hitting the network limit with whatever they were transferring from the database to the web server. So we told them: look, this is your problem, that's what's been going on, and this is not great. So they pulled out the 100 megabit switch, put in a gigabit switch, and said: thank you for fixing our problem, send us the invoice. And we said: sure, but hang on a second. Your site is big, but it's not that big; it shouldn't be doing 100 megabits per second or more. So we started digging a little deeper, and it turns out they were sending 700 gigabytes a day from the database server over that switch to the web server, while they were only sending out 60 gigabytes a day to their visitors. Something really didn't add up: why was all that traffic going around? Upon investigation, it turned out they were using my favorite tool again: Drupal. I'm so glad we don't have to use Drupal any more. Drupal is a system with hooks, and if you build your modules on top of it the wrong way, then when a hook is called it can start retrieving data, content objects and stuff like that, and at the end decide: I don't need all this, and just throw it away. They were retrieving lots of information and not using it at all. The thing is, you should only load data that you are actually going to use, of course, and if you don't know in advance which data you'll need on a certain page, then use lazy loading: load it at the very second you actually need it. And if you're thinking, aha, caching, I'll cache everything in Memcached or Redis: that's basically the same thing. You're retrieving it over the network, you're still pulling it into PHP and processing it, and then throwing it away. There's a lot of stuff like that that's just quietly overloading your network. So then we had this customer, let's call them Z. They had about 150 thousand visits a day, which for a Belgian site is pretty reasonable, and they had a news ticker on their main page, and that ticker would load an XML feed from a different site. Now, they also owned that other site, and they didn't want to overload the other server, so they just cached that feed for 15 minutes. This is the actual code they were using.
It checks the file creation time of the cache file, and if it's more than 15 minutes old, it deletes the cache file, then fetches the feed from a certain URL, puts it into the cache file, and finally parses that cache file. Can anyone tell me what's wrong with this code, or rather, what's not wrong with it? Yes: it's going to download the feed many times every 15 minutes; we'll get back to that in a minute. Yes: it's doing the parsing every single time; we'll get back to that too. Anything else? What if the file doesn't exist yet? Well, I guess filemtime will just return zero or false, so it should actually still work: the fetch simply happens on the first request. Two clients can fetch the file at the same time, good. Yes, it starts by deleting the file. All of these are true, but none of them is actually the problem we encountered; we had a slightly different issue. It's not transactional, so you can have a race condition here, true in theory, but here's what actually happened. We have our web site, and we have the feed location, which was on a different server, in a different data center. And that data center lost power. Now what happens in this case? The cache expires, the code deletes the cache file and tries to fetch the feed, but it gets no reply, so it waits for the timeout. And what's the default timeout in PHP? 60 seconds. So what happens is that each visitor is waiting 60 seconds. Do users really wait 60 seconds? No: they refresh, refresh, refresh, or they open a new tab, half my Firefox tabs are pages still loading, so you get more connections and more connections, and at some point, since they were running Apache, they just hit the maximum number of Apache connections. And then it's not just that page that's down, it's the entire site. So of course we go in and look, and we think: that's odd, no load on the system, and yet there are 400 Apache processes running; what the heck is going on? OK, restart Apache. That doesn't fix it: it's back in the same state almost instantly. It took us a little while to figure out exactly what was going on. And then the developer came up with this fix: he's now creating a stream context with a timeout of 5 seconds and applying that to the file_get_contents call. And yes, this is a fix; I wouldn't say it's a very good fix, but at least the site was no longer 60 seconds slow.
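Putting the pieces together, here is a hedged sketch of a repaired cache function: a 5-second stream-context timeout, a check on the return value, and a write-then-rename so readers never see a half-written file. The URL, paths, function name and 15-minute TTL are illustrative.

```php
<?php
// Fetch a feed with a short timeout, falling back to the stale cached
// copy on failure, and updating the cache atomically via rename().
function refreshFeedCache(string $url, string $cacheFile, int $ttl = 900): string {
    if (is_file($cacheFile) && filemtime($cacheFile) > time() - $ttl) {
        return file_get_contents($cacheFile);  // cache is still fresh
    }
    $context = stream_context_create(['http' => ['timeout' => 5]]);
    $feed = @file_get_contents($url, false, $context);
    if ($feed === false) {
        // Timeout or error: serve the stale copy instead of hanging the site.
        return is_file($cacheFile) ? file_get_contents($cacheFile) : '';
    }
    // rename() is atomic on the same filesystem, so readers always see
    // either the old complete file or the new complete file.
    $tmp = $cacheFile . '.' . uniqid('', true) . '.tmp';
    file_put_contents($tmp, $feed);
    rename($tmp, $cacheFile);
    return $feed;
}
```

Better still, as discussed next, run the refresh from cron every 15 minutes so a visitor never triggers the fetch at all.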
Now let's have a quick look at all the other stuff that was mentioned. For starters, there's the unlink: when the cache expires, you might have a couple of people hitting this code at the same time. One of them might actually unlink the file and then fetch the contents, while another one, right after that, deletes the file that was just put there, which is kind of pointless, and then everybody goes back into that same branch, so it's almost an endless loop. So let's get rid of that unlink: don't delete from the cache, only push your updates to the cache, only write to it. Another thing: what is file_get_contents going to return when there's a timeout? False. So we're actually writing false into our cache file, which is not going to parse very well, so we might want to check for that as well. And as someone mentioned, we're parsing the XML feed every single time; why? We could just do that once, when we write it; that's more convenient and reduces CPU usage. But there's one thing that wasn't mentioned yet: file_get_contents and file_put_contents are not atomic operations in PHP. What that means is that if you've got two people hitting this code and retrieving that feed, and they do file_put_contents simultaneously, you can actually end up with a corrupted file. In the same way, you could have one person doing file_put_contents while another one does file_get_contents on a file that isn't complete yet, so they're reading a corrupted file. So file_put_contents and file_get_contents without locking: bad idea in this case. Plus, we're relying on the user here: the user is effectively deciding when the cache gets updated, which is a bad idea, because it puts the user in control. It might be better to run a cron job every 15 minutes that just fetches that feed. And the thing we actually learned the hard way: we should be using timeouts. The default PHP timeout of 60 seconds is a bit too high for most cases, so when you fetch something from a URL and you want to immediately show information to the user, lower that timeout; the same goes for cURL or anything else that requires networking.
And if you actually trust your data source, like in this case, where they owned the other site that had the feed on it, why not reverse the process? Have the other site push any news updates to you. That way you don't have to go and fetch it every 15 minutes, and on top of that, every single piece of news that gets updated, you get right away, not 15 minutes later. And of course, you should add logging here: you have to know when stuff like this goes wrong; you should receive an e-mail or some kind of SMS. Speaking of logging: logging in PHP using fopen, as we just saw, is not always a good idea, because of locking issues. There are alternatives; we already saw Monolog being mentioned a couple of times today. Using Monolog, you can log to all kinds of different backends, and you can switch from one to the other without actually having to rewrite your code, which is very handy. Then, for Firefox there's a plugin called Firebug; who uses Firebug? OK. Now, there's a plugin for Firebug called FirePHP; who uses that? Not so many. Have a look at it, it's a very interesting thing: it allows you to write directly to Firebug from within your code, without anything showing up in the output to the user on screen. I wouldn't use it in production, but if you really have no other choice, it's a good way to do it. Now, warnings are great, but if you never open the log file, there's no point in logging, of course: you have to watch your logs. And, equally important, you have to be careful about the writing process itself. There are a lot of system engineers who will say: OK, on my database server I put my databases on fast SSDs and my log files on slow, old-fashioned 5 thousand rpm disks, because that's fine. But actually, writing to a slow disk can cause serious I/O load that can bring down a machine: excessive writes from database updates, issues with temporary files, or swapping if you don't have enough memory, which is probably the biggest issue of all. And the same goes for excessive reads.
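As a minimal sketch of the locking-aware logging mentioned above (the log format and function name are invented; for real projects the advice here is to reach for Monolog):

```php
<?php
// FILE_APPEND with LOCK_EX makes concurrent writers queue up instead of
// interleaving half-written lines, avoiding the fopen locking pitfalls
// discussed in the talk.
function logLine(string $file, string $level, string $message): void {
    $entry = sprintf("[%s] %s: %s\n", date('c'), strtoupper($level), $message);
    file_put_contents($file, $entry, FILE_APPEND | LOCK_EX);
}
```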
Excessive reads happen, for example, when database table scans read an entire table from disk. But how do you detect this? Well, who here is running production on Linux? And who is not running production on Linux? One, two, three; a few, maybe. If you're on Windows, check with your system administrator, sorry. If you're on Linux, just type top, and top will give you some very nice output: it tells you how much CPU is being used by user processes, how much CPU is being used by system processes, how much idle CPU time there is, but also how much I/O wait there is. In this case we see 35.5 percent: a third of the time, your CPUs are waiting for the disk to read or to write. That's a lot of time. Then you can drill down using iostat; in this case the I/O wait is 53 percent, so half the time your CPU is just busy waiting for the disk, and iostat will also tell you on which device. There are plenty of tools that let you drill down even further, up to the point where you can say: OK, this is actually being caused by that specific process. If you see I/O wait, stop worrying about your code: go fix that problem first, because that's what's going to kill your server. So: we talked about the database, about indexing, about avoiding for loops. We talked about external systems. It's important for any developer to realize that we're talking to things outside of our code, and to know what we're doing there and what effects it causes, like master-slave replication lag. When we're transferring stuff across the network: do we really need all that data? Are we pulling too much data from the database server to the web server? What happens if there's a timeout, and how do we handle it? Your code should be able to handle failures, at least in a sensible way: it should give a nice error message and alert the system administrator. And of course, in the middle of it all we're processing with PHP. I didn't really talk about that part in this talk, but we can compress things, and we can cache things on the user's side so we don't always have to send them over, leaving less for PHP to process.
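The top check described above is a single command; batch mode is shown here so it can run non-interactively:

```shell
# The 'wa' figure in the %Cpu(s) line is the share of time the CPUs
# spend waiting for disk I/O.
top -bn1 | head -n 5
```

From there, `iostat -x` (from the sysstat package) shows the per-device breakdown, and `pidstat -d` points at the processes actually doing the disk I/O.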
So it's not just about the elephant; it's also about the dolphin, the penguin, the devil, and all those other things. If we want to go from being just PHP coders to being real PHP engineers, we have to look at all those different things. Which brings us to questions. [Audience question] So, with the example of file_put_contents for cached content: what would be the best alternative solution? The best way to do it there is to use a cron job and just let it run every 15 minutes, and not rely on the user to launch that piece of code. Since the cron job updates the file every single time, you won't even have to check the file creation time; you just write right away. That would be the best solution. [Audience question] So the question is: file_put_contents is not an atomic operation, so would we have the same issue in the cron job? Normally not with file_put_contents, because you only run the cron job once every 15 minutes, so there are never two of them running simultaneously. But theoretically you could still have an issue if you call file_put_contents without using the proper locking mode: if you do a file_put_contents and someone is reading the file at the same time on your site, that read could break. So you have to be careful; there are a couple of flags you can use to ensure that nobody can start reading while you're writing. [Audience comment] Right, one request could be enough if it hits at exactly the wrong moment. Well, there is a way to change the locking mode, and then you don't have that issue; but as soon as you start introducing locking, you can run into other issues, because everything is basically waiting on the lock. It also depends on the size of your file, of course: if the file is super small, you're writing it in one I/O operation; if the file is very big, it could cause an issue. So it depends on a number of factors, and there are a couple of ways you can mitigate it.
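The "locking mode" flag mentioned here is LOCK_EX, which file_put_contents accepts as its third argument. A minimal sketch, with an illustrative path; note that a reader only benefits if it also takes the matching shared lock, since a plain file_get_contents does not lock at all:

```php
<?php
$file = sys_get_temp_dir() . '/cached-content.html';  // illustrative path

// Writer: LOCK_EX takes an exclusive advisory lock for the duration of
// the write, so cooperating readers never see a half-written file.
file_put_contents($file, "<p>fresh content</p>\n", LOCK_EX);

// Reader: take the matching shared lock before reading.
$fp = fopen($file, 'r');
flock($fp, LOCK_SH);
$content = stream_get_contents($fp);
flock($fp, LOCK_UN);
fclose($fp);

echo $content;
```

Because these are advisory locks, every process touching the file has to play along; any reader that skips flock() can still observe a partial write.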
[Audience question] Right, so one way to mitigate the issue: we could fix the problem by writing to a different file first and then renaming the file, or copying it over. I don't know offhand whether the copy command is atomic, but the move, the rename, should be. [Audience question] Yes, using a caching system like Memcached or Redis or any other caching system is a lot better, of course. I was just illustrating it with that customer, and they were not as advanced at the time. [Audience question] So the question is: is there a solution to the master-slave issue, with the slave only processing on one core? The issue with MySQL is that replication normally runs on only one core for one database. A possible solution would be to shard your database, to split your data across multiple databases. Of course you lose certain abilities that you only have within one single database, but replication would then be able to run one CPU core per database. You could also replicate certain tables only to certain slaves; if you have multiple slaves, you could split things up that way as well. But then you lose the ability to say, OK, I'm going to promote my slave to be my master immediately; you would have to put the data back together first. OK. Just in case you're from Belgium: we're hiring, if you're looking for a challenge. That's just my five seconds of advertising. I'll put the slides on SlideShare. And please provide some feedback; that goes for all the speakers today. The URL is up here, and if you go to that URL you can click through. Please tell speakers what you liked and how they can improve their talks; that also helps us select talks for future conferences. Thank you.
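The write-then-rename trick from the Q&A can be sketched like this (paths illustrative). Within one filesystem, rename() is atomic on POSIX systems, so a concurrent reader sees either the complete old file or the complete new one, never a partial write:

```php
<?php
$target = sys_get_temp_dir() . '/cached-content.html';   // illustrative path
$tmp    = tempnam(dirname($target), 'cache_');           // same filesystem as $target

// Write the fresh content to the temporary file first...
file_put_contents($tmp, "<p>fresh content</p>\n");

// ...then atomically swap it into place.
rename($tmp, $target);

echo file_get_contents($target);
```

Creating the temp file next to the target matters: a rename across filesystems degrades to a copy-and-delete, which is no longer atomic.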

Metadata

Formal Metadata

Title Beyond PHP - It's not (just) about the code
Alternative Title Php And Friends - Beyond php
Series Title FOSDEM 2015
Author Godden, Wim
License CC Attribution 2.0 Belgium:
You may use, modify, and reproduce, distribute, and make publicly available the work or its content in altered or unaltered form for any legal purpose, provided you credit the author/rights holder in the manner they specify.
DOI 10.5446/34468
Publisher FOSDEM VZW
Publication Year 2016
Language English
Production Year 2015

Content Metadata

Subject Area Computer Science
