Overcoming Cognitive Bias

Formal Metadata

Title
Overcoming Cognitive Bias
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
Overcoming Cognitive Bias [EuroPython 2017 - Talk - 2017-07-14 - Anfiteatro 2] [Rimini, Italy] Starting with a brief description of how built-in mechanisms in our brains lead to cognitive bias, the talk will address how a variety of cognitive biases manifest in the Python and tech communities, and how to overcome them.
Transcript: English (auto-generated)
Thank you, everyone, for showing up for the last session of the conference. It's been an awesome conference. I'm Anna Martelli Ravenscroft, and I am a Pythonista.
I'm a PSF fellow. I am not a programmer. I use Python to get stuff done, and I studied cognitive science at Stanford. Let me move that over there so I can actually see. So I want to talk today a bit about our brains.
Our brains are very clever. They're very good at doing things, but they're also very lazy. They want to reduce the cognitive load. And so if you've seen any of my previous talks, our brains do a lot of little tricks using heuristics and things to reduce how much work they have to do.
This can be awesome, and it can cause some problems. So our brains use heuristics. They do automatic pattern matching, and they fill in the blanks, which is great except when it's a problem.
So for example, lots of fluffy white puppies. They're so cute, and they set up a pattern in your brain, so that when you see a brick wall with a fluffy white animal behind it, your brain can fill in the blanks and say, oh, I recognize that. That's a fluffy white puppy.
Except sometimes it's a cat. So we have these gotchas, these pitfalls. We see patterns where they don't exist. Am I too close to this?
There, let's try that. Is that better? Okay, we see patterns where they don't exist. That's where superstitions come from. We have faulty reasoning. This is why we keep having to remind people that correlation does not equal causation. We do stereotyping. We build patterns about groups of people and then apply those patterns to
individuals. And sometimes those assumptions based on those patterns are faulty. So speaking of patterns, let's look at some computers and programmers, because we're here at a programming conference.
This is Ada Lovelace. She wrote the first computer program. She was working with Charles Babbage on his Analytical Engine, and she wrote a program for it to calculate Bernoulli numbers.
Moving forward 50 years, how many computers do you see in this room? I've circled them. Human computers were the thing back when these women were working at the observatory,
helping to do computations for astronomers. Even in World War II, women were human computers doing all the computations. And then by the end of World War II, they were building electronic computers.
Now, how many programmers do you see in this picture? Women were the first programmers. They took the hardware diagrams, the circuit diagrams, and figured out from there how to actually program the computers.
Also at Bletchley Park, they were doing the same thing. But we still had human computers out there working for places like the Jet Propulsion Laboratory, and you'll notice that not all of them were white. This was before the civil rights movement in
the US, and they were still hiring women of color, because they needed human computers to do the computation, which is great. How many people have seen the movie Hidden Figures? Okay, if you haven't, it's an awesome movie. Go see it. This is the protagonist.
Katherine Johnson was a human computer at NASA, and John Glenn asked her to redo all of the computations that they had done on the electronic computer, because he didn't trust that new fangled gadget.
He wanted someone who knew how to do the mathematics before he would trust his life to it. She was recently awarded the Presidential Medal of Freedom for her work. So, moving forward to the 60s: Mary Allen Wilkes wrote the operating system for the LINC, which was the first microcomputer in the home.
In the 70s, Adele Goldberg helped develop Smalltalk, and Elizabeth Feinler helped create the NIC, the Network Information Center, for the ARPANET.
Frances Allen, an IBM fellow and Turing Award winner, laid the groundwork for parallel computing. And, of course, the amazing Grace Hopper. I'm sure we've all heard of Grace Hopper, and she's the one who documented the first bug. They actually found a moth in the works.
There's now a Grace Hopper Celebration of Women in Computing, where you see thousands of women around you who are all into programming and other technical fields. These four ladies are Italian women on the LIGO project who helped detect gravitational waves with Python. This year, we got to hear an awesome talk from Valeria, who told us how we're using Python in cosmology.
Marlena, I met at PyCon Italia. She's the founder of ZimboPy, and an awesome lady. And then, there's my friend Kay. Kay is a scientist. Kay designs and builds lasers.
Why do I bring up Kay? What do you ask someone you meet at a technical conference like PyCon? Where do you work? What's your favorite programming language? What's your favorite library? What do you do with Python? How do you use Python? What do you ask a woman that you meet at a technical conference like PyCon or EuroPython?
Are you a programmer? Are you here with your husband? Are you a recruiter? Women don't fit the pattern of
programmer that we've built in our brains. Now, my friend Kay is a scientist. If you ask her, are you a programmer? She'd say no. But, if you ask her, do you use Python? She uses Python, NumPy, Matplotlib, all the time. In fact, she could probably run circles around
most of the people in this room with those packages that she uses for her work. So, is she a Pythonista? I'd say yes. So, yes, this talk actually is about cognitive biases.
So, remember the fluffy white puppies? They're all so cute. I like fluffy white puppies and black puppies, and I like kitties, too. I have two dogs, one cat, and eight chickens. So, look around your tech conferences. Look around your work. Look around your meetups.
If they look less like this level of diversity and more like this, your brains are going to build a pattern based on that, saying this is what a programmer looks like.
Our brains pattern match automatically. So, if all the programmers you encounter at work and at conferences look like a white male, nothing wrong with white males. I'm married to one. But if all of them look like a white male, anyone who doesn't look like a white male doesn't fit the pattern.
It's how our brains work. So, if this is your pattern of what a programmer looks like, when you see a billboard like this, with a platform engineer quote, you don't expect
to see this as the picture of the engineer. In fact, there was so much pushback on this picture that she created a hashtag, I look like an engineer, because people needed to be reminded that engineers look like a lot of different things.
So, when you ask, are you a programmer of a woman at a conference? That's cognitive bias at work. And you might be objecting, I'm less biased than most people. I'm too smart to be biased.
We only care about performance. We're a meritocracy. There's something called the bias blind spot. Studies have shown that we're less likely to be aware of our own biases than of others'.
And intelligence, because we're all really smart people, intelligence is uncorrelated with how high your bias blind spot is, or how aware you are of your own biases. And people with a high bias blind spot take less advice from others and learn less from anti-bias training.
There's a lot more cognitive biases out there. Confirmation bias is a big one. It's a tendency for people to want to prove that they're right, that their preconceptions are right. There's in-group bias, where we want to favor people who are members of our own group.
We tend to think that others will think like us, will have the same priorities. This is why we run into problems if we don't actually talk to real users when we're creating products.
Because we think, oh well, I know what they need. Maybe you do, maybe you don't. But you may simply be projecting your own beliefs onto them. There's selective perception. There's status quo bias. There's a lot of biases. These are just a few.
So, confirmation bias. Here's a classic test for it, the Wason selection task. The rule is: if the card has a vowel on one side, then it must have an even number on the other side. So which two cards do you turn over to test the rule? Obviously you're going to turn over card A,
the card with the A on it, to see if there's an even number on the other side. But which other card are you going to turn over, the four or the seven? How many say the four? Yeah, you guys already know this. How many say the seven?
A lot of people who haven't been warned about confirmation bias will say, oh, we have to turn over the four because we want to prove the rule is right. Now as programmers, we know that if we only test to prove that our program is right and
works right with the expected input, we're going to have problems. So we test for edge cases; we try to break the program. People have taught us, and we've sometimes learned the hard way, to test for breakage, to try to disprove that the program is right.
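To make that concrete, here's a minimal sketch of the card task in Python. The hidden faces in the asserts are made-up examples for illustration:

    # The rule: if a card has a vowel on one side, it must have an even
    # number on the other. A card violates the rule only if it pairs a
    # vowel with an odd number.
    VOWELS = set("AEIOU")

    def violates_rule(side_a, side_b):
        faces = (str(side_a), str(side_b))
        has_vowel = any(f in VOWELS for f in faces)
        has_odd = any(f.isdigit() and int(f) % 2 == 1 for f in faces)
        return has_vowel and has_odd

    # Whatever hides behind the 4, the rule survives, so flipping it can
    # only confirm -- that's the confirmation-bias choice:
    assert not violates_rule("4", "A")
    assert not violates_rule("4", "K")
    # A vowel behind the 7 breaks the rule, so flipping the 7 (along with
    # the A) is what can actually falsify it:
    assert violates_rule("7", "E")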
Now, about that claim that we only care about performance. This shows up in universities, in orchestras, in a lot of places that you would think of as being very
focused on diversity, very liberal, very much wanting to be unbiased. There was a study of university hiring committees where identical resumes were sent out to different committees.
Some of them had women's names and some had men's names. And they found that resumes with women's names were judged much more critically, even by the women, than the resumes with men's names. And these are people who really want to be unbiased, but this was happening to them subconsciously, too.
An orchestra was trying to increase the diversity of its membership. They knew that women and people of color were just as talented, but they were having trouble with hiring people, and so they said, we'll do blind auditions. So they put up a screen
so that they could not see the performer when they came in, so they could judge only on skill. This increased the number of people of color who were given offers to join the orchestra, but not so much the women.
Why? What do women wear on their feet when they're dressed up? High heels. And musicians, of course, could hear that this person was walking in high heels, and just that was enough of a clue that this was a woman, and they judged their performance more critically.
And so they put down a carpet so that they couldn't hear the person walk in, and suddenly women were being offered positions at the same level as the men. How does this happen?
Let me sum up. If this is what you used to build your model of a programmer, and this person doesn't fit the model, your brain is going to stereotype. It says: this person doesn't match the pattern. Then it looks for proof of the pattern you already have. Why doesn't this person match? Why doesn't this person belong here? You're going to try to prove that. You're going to be more critical of errors and weaknesses. You're going to give less credit to strengths and abilities. You will perceive them differently. This isn't conscious; it's subconscious, at a perceptual level. It's not that we're bad people. We're humans. This is how our brains work.
So cognitive bias is getting in the way of our meritocracy. How do we overcome it? With awareness and conscious effort. I'm making you aware here of some of these cognitive biases, but you still need to put in the effort to
overcome them, to short-circuit them, and to change those mental models that you've built up. Welcome and mentor new Pythonistas. Reach out to marginalized groups. Actively work to bring in marginalized programmers and Pythonistas to conferences and user groups.
EuroPython, I think, has done a really good job at this. Get organizations to sponsor people who aren't like you to attend and speak at conferences.
Actively challenge the patterns you've subconsciously built up by offering your brain new and varied input. When you're hiring, avoid gendered ads. There's an app for that. I believe these are English-only, but there are tools out there that will review how you've phrased your ads to make sure you're not turning women and other people away from submitting resumes.
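As a rough sketch of what such a tool does (the word lists here are illustrative assumptions, not any real tool's vocabulary):

    # Flag gender-coded words in a job ad.
    MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "aggressive"}
    FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

    def audit_ad(text):
        words = {w.strip(".,;:!?").lower() for w in text.split()}
        return sorted(words & MASCULINE_CODED), sorted(words & FEMININE_CODED)

    masc, fem = audit_ad("Wanted: a competitive rockstar ninja for our team.")
    print("masculine-coded:", masc)  # ['competitive', 'ninja', 'rockstar']
    print("feminine-coded:", fem)    # []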
Use blind resumes where possible. Remove the name, because that's a clue to gender and
often to ethnicity. Remove universities and other clues to group membership. Do you really care whether this person got their skills from a CS degree, or do you care about the skills themselves, whether they came from a boot camp or from being self-taught? We all know awesome programmers who were self-taught.
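Here's a minimal sketch of what blinding a resume might look like; the field names are hypothetical stand-ins for whatever your applicant-tracking system exports:

    # Drop fields that hint at gender, ethnicity, or group membership.
    IDENTIFYING_FIELDS = {"name", "photo", "university", "graduation_year"}

    def blind(resume):
        return {k: v for k, v in resume.items() if k not in IDENTIFYING_FIELDS}

    candidate = {
        "name": "...",          # clue to gender, often to ethnicity
        "university": "...",    # clue to group membership
        "skills": ["Python", "NumPy", "Matplotlib"],
        "experience_years": 7,
    }
    print(blind(candidate))  # only skills and experience survive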
Focus more on skills than culture fit. Culture fit tends to fall into in-group bias: we give more leeway to people who are already part of our own group. Use technology like GapJumpers to do blind auditions, to find out what kind of skills a person has before you bring them in to interview.
And when you're interviewing them face to face, short-circuit the cognitive bias by looking first for reasons to hire. You're already subconsciously going to be looking for reasons to weed them out, because they don't fit your patterns.
So focus on reasons you want to hire this person. Actively look for strengths and abilities, and watch out for imposter syndrome. If your candidate says, oh, that was a team effort, or, I got lucky, that's probably imposter syndrome talking, and they're going to downplay their accomplishments, so take that into account when you're evaluating them. At conferences like EuroPython and PyCon, and in your user groups and meetups,
look for people who are not like you. People who don't fit the stereotype. Listen to them. Presume they're smart. Just as smart and technical as you are.
Believe them when they share their experiences. Look for things you have in common and their strengths and abilities. At a technical conference presume everyone you meet there is a programmer or does some programming. If you're at a Python conference, everyone here is a Pythonista until you're told otherwise.
Stop asking, are you a programmer? Instead ask, what kind of programming do you do? How do you use Python? Or any other question you would ask someone who you already assume is a programmer. And you may find out that that person is a lot more like you than you expected and you may learn about
niche uses of Python that you didn't know about. I've talked to people who have done all of these different things with Python, including squirrel deterrence.
By working together to overcome the stereotypes and other cognitive biases, we all get that much closer to actually achieving a true meritocracy and making the Python community, and maybe the world, better. Because after all, Python is going to save the world. I don't know how, but it will. And I guess I talked really fast, because sometimes this takes longer.
So I'm open for questions. I timed this last night in my hotel and it took longer.
Questions? We have time for quite a few questions, so that's good. Any questions? I'll start. Just curious to hear your opinion about the gender pay gap, the fact that women receive lower salaries than men. We all know that. Yeah. In your opinion, and I really appreciate what I just learned here, how do you look at it, and how do you see a potential improvement in that?
So, the gender pay gap exists. There have been plenty of studies about it. There's a whole variety of things involved in it, everything from women not negotiating the same as men do when they start a new job, to not being willing to ask for raises as much as men do.
And sometimes there is subconscious bias in the people who are making decisions about pay raises. So one of the things you can do, if people are at the same level and doing the same work, is to use programming to start looking for those gaps and trying to close them. If they're getting the same "meets/exceeds" or whatever evaluations, then they should be at the same level of pay as the other people doing the same work at that level.
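As a sketch of what using programming to look for those gaps might mean, assuming an HR export with level, gender, and salary columns (all the names and numbers here are made up):

    import pandas as pd

    salaries = pd.DataFrame({
        "level":  ["senior", "senior", "senior", "mid", "mid", "mid"],
        "gender": ["F", "M", "M", "F", "M", "M"],
        "salary": [98000, 105000, 103000, 80000, 84000, 82000],
    })

    # Median salary per level and gender; a large within-level gap is
    # where to start asking questions.
    by_group = salaries.groupby(["level", "gender"])["salary"].median().unstack()
    by_group["gap_pct"] = (by_group["M"] - by_group["F"]) / by_group["M"] * 100
    print(by_group.round(1))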
Other questions? Thank you for the presentation, first of all. Is there any way to rebuild this part of our brains, and how long will it take? For instance, I got lucky in my team: we have about 40% women, and they're core developers, and they're really hardcore.
But I still have the stereotype, and I know it's a stereotype. So is there a way to undo this learning? If you're talking about undoing the mental models we have of what a programmer is, like I said, the best way is to keep challenging them by
meeting more people who don't fit the model and getting to know them as people. That helps a lot. Any women here (and men are also welcome) who haven't been to the Grace Hopper Celebration of Women in Computing: it is an amazing conference, and it can really help,
especially women to overcome that feeling that I don't really belong in tech because you're surrounded by 3,000 women in tech and it's great. Really helps to overcome that subconscious bias that we have as well because we're raised in the same culture and
so we need to overcome it just as much as our male peers do. So that can help a lot. And just seeking out other people, women, people of color, people who aren't like you, and saying, hey, let's talk about Python, or let's talk about programming, or whatever. It
really helps to change how you view things. One of the things that I ran into, it's not just color or gender, it's also disabilities, things like that.
I worked at a history of medicine library when I was younger and this woman came in and was speaking very, very slowly. And so I was feeling kind of dismissive about her as being, well, okay, so she's not so bright.
That turns out to be because I equated fluency of speech with intelligence, or I did at that time. And I found out she was working on her PhD in muscular dystrophy, which she had,
which was why she was speaking so very slowly. And the woman was brilliant. I just hadn't personally encountered someone with this, and it challenged my own mental models of intelligence and verbal fluency. So the more you do that with whatever mental models you have, the better off you are. Now, if anyone here is wondering why we want more diversity,
I've got a talk called Diversity as a Dependency that you can look up on YouTube. But I'm assuming everyone here has already gotten past that part. If you do want to review it, you're welcome to look that up.
Other questions? Yes. So in the beginning of your talk you said a couple of times that these individuals helped compute and those individuals helped create. I kept wondering why not just say that they computed and created? Is
it some kind of third-person imposter syndrome, if such a thing exists? Okay, sorry, it's really hard to hear up here. The question is about imposter syndrome. I
was having trouble hearing. Yeah, perhaps there was a bit of imposter syndrome in your presentation as well, in the beginning, when you described those women as those who helped create and helped compute, as opposed to created and computed. Actually, the women that I mentioned who helped create were co-creators.
Grace Hopper created certain things and discovered certain things, but the women I mentioned as having helped create were actually co-creators. But I
could be wrong; I know that I suffer from imposter syndrome. So it's always good to point out things like that. Thank you. Thank you, that was extremely interesting. You said earlier on in your talk that there is no correlation between
intelligence and the propensity to fall victim to blind spots. So why doesn't our definition or our conception of intelligence include this, what seems to be quite an important cognitive ability, not to fall victim to blind spots?
The study that I'm mentioning is actually fairly recent and so people are still figuring this stuff out. But emotional intelligence and things like that, social intelligence, are different from cognitive intelligence.
That's part of it, but it's possible that we all fall into these things just because this is how our brains are made. They're orthogonal axes: whether you have a bias blind spot and whether you're really, really intelligent are completely uncorrelated. That's what they found in the studies. I'm just wondering why they don't want to redefine intelligence so that it is incorporated. You'd have to look at the study and see how they decided that.
Actually, I'm thinking that any redefinition of intelligence that says Isaac Newton, the greatest genius who ever lived, was not intelligent would fail my definition of, what's the word for it. He had enormous biases. Nevertheless, he created modern science. How do you weigh those? Anyway,
sorry, my question was about anonymizing university references in resumes. Suppose your company gets five to ten million resumes a year. With great effort, by asking your people to spend a lot of time interviewing, you can interview about 200,000, maybe 300,000.
As it happens, one of the criteria used to pick those 300,000 out of six million is: other things being equal, we'd rather hire somebody who graduated from Stanford, like you,
than somebody whose theoretical knowledge comes from a two-month course or boot camp. Would it be better to go at random?
And that's what we would be doing if we deleted that information, since we can only interview a tiny fraction of the huge number of resumes sent to us. I agree that it's a hard problem. And I agree that having a degree
from a certain university may mean something. But whether that's where they got their skills in programming from may or may not be relevant to how they do their job. So, for example, someone with a degree in
music or art or psychology from Stanford who went to a boot camp to learn how to program, are they better or worse than someone who got a CS degree from Iowa University? I don't know.
But you would have to look at their actual skills. And so one of the problems that you run into, as you mentioned, you saw the keynote yesterday from the woman who talked about the many paths that we follow to get to programming. Women tend to
get into programming through various paths, rather than through: I started programming in high school, then I did computer science, then I got my job. Women tend to take a more circuitous path. And so if you're going for people who got a CS degree at a
university, you're going to, just by that criterion, weed out a lot more women than men, simply because of the way that women get into technology. So that's a problem that you'll have to decide how you're going to address. I
can't tell you how to address it other than to say be aware of it and think about what kinds of ways you can look to change things a little bit.
Just shout. If you didn't go to the Despicable Machines talk yesterday,
you should see the video of it, because that is exactly the problem they ran into with machine learning. Because of the corpus they used, they came up with results that were not unbiased. We need to be aware that the results from our machine learning are only as good as the training data we put in, and a lot of the training data is going to be biased. If it's from news sources, it's going to be biased. If it's from prison records, it's going to be biased. Being aware of that and figuring out how you're going to address it is above my pay grade.
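One concrete first step is simply to audit who is represented in a corpus before training on it. A minimal sketch, where the gender field is an assumption about your data:

    from collections import Counter

    training_records = [
        {"text": "...", "gender": "M"},
        {"text": "...", "gender": "M"},
        {"text": "...", "gender": "M"},
        {"text": "...", "gender": "F"},
    ]

    counts = Counter(r["gender"] for r in training_records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group}: {n / total:.0%} of training data")
    # A model trained on a skewed corpus will reproduce the skew.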
It is hard for us to recognize some of these things and so sometimes you need to have someone else look at your data who comes from a different perspective and
who may be able to see things that you don't. Having people from different perspectives really does help a lot because they are not going to be, well, it's like any kind of testing. You want someone testing your results or your programming who wasn't deeply into the problem.
Because when you're deep into the problem and busy programming or developing or whatever, you're so deep into it that you've got all of your mental model there of how this should look, what this really means. And so you have all these expectations already built in. You're not going to see the problems. You're not going to see
what's going on. You need to have a second pair of eyes out there, someone who hasn't been so deeply involved. And, you know, open source is awesome for programming because, as Eric Raymond said, given enough eyeballs, all bugs are shallow.
Maybe we need more eyeballs on these programs that we use and the results that we're getting to make sure that they're not coming up with biased results before we use them to do decision making about people.
Other questions, or are we out of time? So, as you can see, I am a woman of color.
So what can I do to help other people overcome this cognitive bias? You can give talks. Seriously, women don't submit enough talks to conferences. We self-select out, and I've heard the excuses: oh, I don't know enough about such-and-such. I didn't write the book on it. Therefore, I don't know enough to give a talk on it.
Whereas I've seen cases where someone says, oh hey, I learned about this thing and used it once, so I can give a talk on it. Women need to be more like that sometimes. We need to be out there giving talks, technical talks, not just my psych talks,
showing that we are part of the technical community and valid members of it. And when someone does come up to us and say, are you a programmer? Say: yes, are you?
And just say yes, and I use Python in this way and that way and engage them in a conversation and help them realize that, okay, well, maybe that was not a useful question and help them understand that there are better questions out there that they could be asking. And
the more that we get to know each other as people, I think that will help a lot. Other questions? Yeah, thanks very much for the great talk. You mentioned removing names and genders from CVs to prevent selection bias before, like when you see someone's CV.
I'm wondering, a lot of the times you actually need to interview someone in person or via Skype or something. Do you have any tips, short of asking all candidates to wear a paper bag on their head and speak through a Darth Vader voice changer, how we can avoid this kind of bias when we're actually seeing them in person and asking them these questions?
Okay, so once you've gotten to the interview point, you've already put them, presumably, through some level of skills testing, so you know that they've got some level of skills. If you haven't done that, maybe use GapJumpers or something like that to help you build
ways to do testing, to make sure that you know that they do have skills before they get as far as an interview. And then, like I mentioned, you can help to short-circuit your own bias by looking for the strengths, looking actively for the abilities, for the skills, for the reasons to hire.
Because you're automatically already going to be figuring out all the reasons not to. But start focusing on the reasons to hire this person, on why they are a good candidate. That will help you get beyond that hyper-criticality that we have
when we see someone who doesn't fit the model. Okay, I think that's all the time we have for questions now. So yeah, thank you. I know that was really good. Thank you.