Data and Algorithmic Bias Essay

Hello!! I need help completing this case study regarding the Chicago Police Department and algorithmic data/bias. This case study essentially analyzes the Chicago police data and algorithmic bias provided below, where we need to use concepts that have been discussed (I have attached some concepts) to help support what we are trying to argue. Below I have provided the data (which is a little hard to understand) along with readings that will help explain it. Background materials:

This is the data!!!

SSL Dashboard

This reading below will help you understand the data!

Dumke, M., & Main, F. (2017, May 18). A look inside the watch list Chicago police fought to keep secret. Chicago Sun-Times. Available at

https://chicago.suntimes.com/2017/5/18/18386116/a-look-inside-the-watch-list-chicago-police-fought-to-keep-secret

If you are unable to view this link, try a different browser. If you still cannot access the link, you can also access the article here.

This is another reading for further clarification!!

Gilbertson, A. (2020, Aug 20). Data-informed predictive policing was heralded as less biased. Is it? The Markup. Available at

https://themarkup.org/ask-the-markup/2020/08/20/does-predictive-police-technology-contribute-to-bias

Using the concepts we have discussed in class, analyze the use of the Chicago Police Department’s Strategic Subject List. Be sure to draw on the readings assigned for class and highlight potential issues surrounding the development and use of this tool.

Below I have also attached a screenshot of the concepts that can be used, along with another reading, to further support the analysis of the algorithmic data and bias displayed in the case!

Please make sure to cite everything that is used and include the references! Please only use the sources I have provided for you; they should help. This paper should be in APA format, double-spaced, and 5 pages long.

Thank you! I hope you’re having a good day!!

RACIST IN THE MACHINE: THE DISTURBING IMPLICATIONS OF ALGORITHMIC BIAS
MEGAN GARCIA
World Policy Journal, Vol. XXXIII, No. 4, Winter 2016/2017, pp. 111–117. DOI: 10.1215/07402775-3813015

Tay's first words in March of this year were "hellooooooo world!!!" (the "o" in "world" was a planet earth emoji for added whimsy). It was a friendly start for the Twitter bot designed by Microsoft to engage with people aged 18 to 24. But, in a mere 12 hours, Tay went from upbeat conversationalist to foul-mouthed, racist Holocaust denier who said feminists "should all die and burn in hell" and that the actor "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

This is not what Microsoft had in mind. Tay's descent into bigotry wasn't pre-programmed, but, given the unpredictability of algorithms when confronted with real people, it was hardly surprising. Miguel Paz, distinguished lecturer specializing in data journalism and multimedia storytelling at the CUNY Graduate School of Journalism, wrote in an email that Tay revealed the problem of "testing AI in an isolated controlled environment or network for research purposes, versus that AI sent out of the lab to face a real and highly complex and diverse network of people who may have other views and interests."

Tay, which Microsoft hastily shut down after a scant 24 hours, was programmed to learn from the behaviors of other Twitter users, and in that regard, Tay was a success. The bot's embrace of humanity's worst attributes is an example of algorithmic bias—when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed. In the case of Microsoft's social media experiment, no one was hurt, but the side effects of unintentionally discriminatory algorithms can be dramatic and harmful.

Companies and government institutions that use data need to pay attention to the unconscious and institutional biases that seep into their results. It doesn't take active prejudice to produce skewed results in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for. Thus, as we begin to create artificial intelligence, we risk inserting racism and other prejudices into the code that will make decisions for years to come. As Laura Weidman Powers, founder of Code2040, which brings more African Americans and Latinos into tech, told me, "We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in because of the often self-reinforcing nature of machine learning."

Algorithmic bias isn't new. In the 1970s and 1980s, St. George's Hospital Medical School in the United Kingdom used a computer program to do initial screening of applicants. The program, which mimicked the choices admission staff had made in the past, denied interviews to as many as 60 applicants because they were women or had non-European sounding names. The code wasn't the work of some nefarious programmer; instead, the bias was already embedded in the admissions process. The computer program exacerbated the problem and gave it a sheen of objectivity. The U.K.'s Commission for Racial Equality found St. George's Medical School guilty of practicing racial and sexual discrimination in its admissions process in 1988.

That was several lifetimes ago in the information age, but naiveté about the harms of discriminatory algorithms is even more dangerous now. Algorithms are a set of instructions for your computer to get from Problem A to Solution B, and they're fundamental to nearly everything we do with technology. They tell your computer how to compress files, how to encrypt data, how to select a person to tag in a photograph, or what Siri says when you ask her a question. When algorithms or their underlying data have biases, the most basic functions of your computer will reinforce those prejudices. The results can range from such inconsequential mistakes as seeing the wrong weather in an app to the serious error of identifying African Americans as more likely to commit a crime.
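
To make the mechanism in the St. George's example concrete, here is a minimal, hypothetical sketch (it is not from Garcia's article and not any real admissions system): a "screening model" that simply imitates past decisions reproduces whatever bias those decisions contain. Every name and number is invented.

```python
# Minimal illustration: a "screening model" that simply imitates historical
# decisions will reproduce whatever bias those decisions contain.
# All data here is fabricated for illustration only.
from collections import defaultdict

# Fabricated historical admissions decisions (1 = invited to interview).
# The historical process systematically disadvantaged one group.
history = [
    {"group": "A", "grade": 85, "invited": 1},
    {"group": "A", "grade": 70, "invited": 1},
    {"group": "A", "grade": 60, "invited": 0},
    {"group": "B", "grade": 85, "invited": 0},
    {"group": "B", "grade": 70, "invited": 0},
    {"group": "B", "grade": 90, "invited": 1},
]

# "Training": record the historical invitation rate for each group.
totals, invites = defaultdict(int), defaultdict(int)
for row in history:
    totals[row["group"]] += 1
    invites[row["group"]] += row["invited"]

def predict(applicant):
    """Invite if applicants like this one were usually invited in the past."""
    g = applicant["group"]
    return 1 if invites[g] / totals[g] >= 0.5 else 0

# Two new applicants with identical grades but different group labels.
print(predict({"group": "A", "grade": 80}))  # 1 -> invited
print(predict({"group": "B", "grade": 80}))  # 0 -> rejected; bias reproduced
```

Nothing in this sketch mentions gender or national origin explicitly; the group labels could just as easily be postal codes or school names. The skew comes entirely from the historical decisions the rule was built to mimic, which is the dynamic Garcia describes at St. George's and in the data Tay was fed.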

Computer-generated bias is almost everywhere we look. In 2015, researchers at Carnegie Mellon used a tool called AdFisher to track online ads. When the scientists simulated men and women browsing online employment sites, Google's advertising system showed a listing for high-income jobs to men at nearly six times the rate it displayed the same ad to women. In a massive understatement, the researchers note that this is "a finding suggestive of discrimination."

In another study, researchers from the University of Washington found that a Google Images search for "C.E.O." produced just 11 percent women, even though 27 percent of chief executives in the U.S. are women. That's bad enough, but in 2015, when the study was done, the first image of a woman CEO that popped up was "CEO Barbie." Ironically, it was an image pulled from a 2005 Onion article with the headline "CEO Barbie Criticized For Promoting Unrealistic Career Images."

The consequences of these blind spots can be grave. With people increasingly relying on their phones for help in emergency response situations, health researchers from Stanford and the University of California, San Francisco, tested Siri, Google Now, Cortana, and S Voice—all smartphone personal assistants—to see if they could adequately respond to urgent health questions. Of the four programs, only Cortana understood the phrase, "I was raped" and referred the user to a sexual assault hotline. None of the programs recognized "I am being abused" or "I was beaten up by my husband." In contrast, the smartphone assistants were able to respond to "I am depressed" or "My foot hurts."

The glaring omission of programmed knowledge about health crises that predominantly affect women caused a media outcry and prompted the American Civil Liberties Union to launch an online petition urging Apple to program Siri to provide information about women's health. Soon an Apple team began working with the Rape Abuse and Incest National Network (RAINN) to help Siri understand similar requests and present the right dialogue when asked. Now if a user asks Siri about a case of rape, Siri responds with, "If you think you may have experienced sexual abuse or assault, you may want to reach out to someone at the National Sexual Assault Hotline," and the person is directed to RAINN's website.

The problems aren't limited to sexism either. In 2015, Jacky Alcine was browsing his Google Photos when he noticed that the app's face-recognition algorithm tagged him and an African-American friend as "gorillas." He shared a screenshot of the tag on Twitter, which went viral on social media.

[Photo: Jacky Alcine]

Algorithms' learned mistakes aren't just offensive. More and more computers are tasked with making crucial decisions, often on the basis of their perceived impartiality. For example, police use algorithms to target individuals or populations, and banks use them to approve loans. In both instances, computer results have been discriminatory—a reminder that learning how to account for algorithmic bias is increasingly important as more financial and legal decisions are driven by artificial intelligence.

Technology companies, banks, universities, or anywhere else dependent on algorithms need to form diverse teams to better anticipate problems. Earlier this year, members of the Rainbow Laboratory at Drexel University wrote a white paper entitled, "Does Technology Have Race?" In it, they argue that the logic of Black Lives Matter should govern "technology design." The absence of people of color at various stages of programming and product development, they argue, leads to racist outcomes.

In the past two years, many technology companies have started to release their workforce diversity data. The openness is an about-face from their previous unwillingness to be transparent about their employees. Diversity data came only after five companies—Apple, Applied Materials, Google, Oracle, and Yahoo—fought an earlier attempt by the San Jose Mercury News to get Silicon Valley's 15 largest companies to disclose the demographics of their workforces. In 2010 and then again in 2012, the five companies argued that releasing diversity data would cause them competitive harm.

In a dramatic reversal in 2014, Google released its data and a look behind the curtain revealed how few minorities worked at the tech giant. In 2014, the company was 61 percent white, 30 percent Asian, 3 percent Hispanic, and 2 percent African American. After Google decided to be transparent about its workforce demographics, Pinterest, Intel, Apple, and others followed suit. On gender, tech companies aren't much better: Thirty-one percent of Google employees are women, and that number goes down to 19 percent if you look at Google's tech workforce. These numbers have moved almost nowhere since 2014 when the data was first reported.

Google is not alone among tech companies in being overwhelmingly white or Asian and male. Despite large investments in recruiting and hiring women and underrepresented minorities, the data shows that these efforts are nudging diversity numbers up extremely slowly. Intel, for instance, announced in 2015 it would spend $300 million over three years to improve diversity, but it will take time to make the tech pipeline better reflect the world we live in.

Unfortunately, there's little evidence that tech companies are diversifying staff on a larger scale. Not a single company has publicly connected cases of algorithmic bias to changes in its hiring practices.

Many studies have demonstrated that diversity of thought, gender, and race spurs greater innovation and increased financial returns, and that making an effort to hire a greater variety of employees could dramatically decrease the likelihood of bias. For instance, Apple hired Jody Castor, a blind engineer, to work on accessibility including for VoiceOver, a feature that allows blind users to access their Apple devices based on spoken descriptions.

Many large technology companies have started to say publicly that they understand the importance of diversity, specifically in development teams, to keep algorithmic bias at bay. After Jacky Alcine publicized Google Photo tagging him as a gorilla, Yonatan Zunger, Google's chief social architect and head of infrastructure for Google Assistant, tweeted that Google was quickly putting a team together to address the issue and noted the importance of having people from a range of backgrounds to head off these kinds of problems.

In recent comments to the Office of Science and Technology Policy at the White House, Google listed diversity in the machine learning community as one of its top three priorities for the field: "Machine learning can produce benefits that should be broadly shared throughout society. Having people from a variety of perspectives, backgrounds, and experiences working on and developing the technology will help us to identify potential issues."

A few researchers aren't waiting for that to happen and are working across disciplines to design other strategies to reduce algorithmic bias. Moritz Hardt and Solon Barocas, of Google Research and Microsoft Research, respectively, established FAT ML—Fairness, Accountability, and Transparency in Machine Learning—an interdisciplinary workshop whose research includes analyzing algorithmic bias in bail decisions and trying to understand how algorithmic bias affects journalism.

Despite the efforts of FAT ML and others, few people are equipped to hold a rigorous discussion about how to ethically mine data. And, considering the scope of the problem, tech companies aren't seriously addressing the issue either. It seems it may take a shock from outside the tech industry to force the issue, and new laws in the European Union might just do the trick.

RIGHT TO EXPLANATION

Algorithmic bias is seen differently in the EU than in the U.S. In April, the EU passed a new General Data Protection Regulation (GDPR), slated to take effect in 2018. The GDPR will create a "right to explanation," whereby a user can ask why an algorithmic decision was made about him or her. This law is a meaningful departure from current American understanding that algorithms are proprietary and therefore lawfully kept secret from competitors or the general public.

While the GDPR is not explicit about discrimination, it does bar the use of algorithmic profiling "on the basis of personal data which are by their nature particularly sensitive," explained Bryce Goodman, a Clarendon scholar at Oxford University and an expert in data science. "The way I read it, [the GDPR] has a prima facie prohibition against processing data revealing membership in special categories."

If Goodman's reading is correct, companies operating in the EU after 2018 are going to have to create algorithms that do not take into account special categories—what in the United States are called protected categories—like race, gender, and disabilities. As Goodman noted, the new EU regulation "sets a very, very high bar for data that is 'intentionally revealing' of special conditions."

What remains to be seen is how the United States' and European Union's different approaches to algorithmic discrimination will alter the behavior of large technology companies, all of which operate in both markets. This is reminiscent of a European Court of Justice ruling from 2014 that EU citizens have the right to be forgotten. The decision forced Google to remove links to items that are "irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed and in the light of the time that has elapsed." As a result, Google and other search engines set up different procedures inside and outside Europe. Inside the EU, they received appeals for deletions and began removing items from their search results, but outside the EU, there was no such option. It is likely that search engines will respond similarly to the GDPR—developing algorithms that don't factor in special categories in the EU but do so outside of it.

PEERING INTO THE ALGORITHMIC FUTURE

Legislation like Europe's GDPR doesn't seem likely to pass in the U.S., but there are other strategies for improving algorithmic transparency that could be effective.

In 2010, the Wall Street Journal unearthed a practice in which minorities who visited the Capital One site were directed to apply for cards with higher interest rates than white visitors to the site. Cynthia Dwork at Microsoft Research and Richard Zemel at the University of Toronto advocate for a system where people who share particular attributes are classified in a similar way by a website. For example, people who have similar credit scores have to be treated fairly when they go to a bank website or apply for a credit card. "What we advocate is sunshine for the metric," Dwork said at FAT ML in 2014, "The metric should at the very least be open and up for discussion. There should not be secret metrics."

Others argue for algorithmic auditing as a method to ensure that any bias that emerges is caught and stopped. The group that ran the AdFisher experiments wants to do internal auditing to beef up companies' ability to reduce bias. "I want to provide Google with tools that can help police advertisers' understanding of Google's machine learning models and intervene when [they're] learning questionable or discriminatory factors," Michael Carl Tschantz, a member of the Carnegie Mellon research team, said.
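
To picture what Dwork and Zemel's idea (and algorithmic auditing more generally) might look like in practice, here is a hypothetical sketch that is not drawn from their work or any real lender: it checks whether applicants with similar credit scores receive similar offers, and flags pairs where they do not. The toy scoring rule, the ZIP codes, and the thresholds are all invented for illustration.

```python
# Hypothetical audit sketch: people with similar credit scores should be
# treated similarly, regardless of other attributes. Not an implementation
# of any published system; data and thresholds are invented.
from itertools import combinations

def offer_rate(applicant):
    """A toy lending model. The 'zip_code' term stands in for a proxy
    variable that may correlate with a protected category."""
    rate = 20.0 - 0.1 * (applicant["credit_score"] - 600)
    if applicant["zip_code"] in {"60624", "60636"}:  # arbitrary example codes
        rate += 3.0
    return rate

applicants = [
    {"name": "p1", "credit_score": 700, "zip_code": "60614"},
    {"name": "p2", "credit_score": 705, "zip_code": "60624"},
    {"name": "p3", "credit_score": 650, "zip_code": "60636"},
    {"name": "p4", "credit_score": 648, "zip_code": "60657"},
]

SIMILAR_SCORE = 10   # "similar individuals": credit scores within 10 points
MAX_GAP = 1.0        # treated "similarly": offered rates within 1 point

# Audit: compare every pair of similar applicants and flag unequal treatment.
for a, b in combinations(applicants, 2):
    if abs(a["credit_score"] - b["credit_score"]) <= SIMILAR_SCORE:
        gap = abs(offer_rate(a) - offer_rate(b))
        if gap > MAX_GAP:
            print(f"flag: {a['name']} vs {b['name']} differ by {gap:.1f} points")
```

The choice of similarity metric (here, credit scores within 10 points) is doing all the normative work, which is why Dwork's call for "sunshine for the metric" matters: if the metric stays secret, a check like this cannot be contested by the people it affects.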

With the rapid progression of artificial intelligence, the rise of so-called deep learning algorithms has serious implications. Deep learning allows computers to adapt and alter their own underlying code after digesting huge amounts of data. In essence, the algorithms program themselves. As Jen-Hsun Huang, chief executive of the graphics processing company Nvidia, told The Economist earlier this year, "This is a big deal. Instead of people writing software, we have data writing software."

It is exponentially more difficult to determine what is causing biased outputs in algorithms that self-program. Is it the underlying data? Or is it the code that forms the algorithm? Google, for instance, is moving "more and more toward deep learning algorithms. Those themselves pose a real challenge because they're not designed to be scrutable. The whole point is that you've got layers and layers and layers in order for it to work," Goodman said. "The challenge of opacity in the technology itself is important to recognize. When people call for algorithmic transparency, what does that mean? Just looking at the code that Google is running isn't going to be informative at all. Some of these models are not intelligible to human beings."

Goodman is investigating a way forward. He wants to create a framework that brings together computer science, law, and ethics to establish best practices for avoiding algorithmic discrimination. So far, legal and ethical scholars have theorized about computer bias without having the grounding in the technology, while technical experts often seem to operate without considering the social and ethical impacts of their creations. Goodman wants to bridge that chasm.

Through a series of meetings, Goodman is trying to develop a network that draws upon a range of disciplines. He said his hope is that the network would then draft guiding principles that could become best practices or the basis for a certification, which a company could use to demonstrate its efforts to reduce algorithmic bias.

Another approach assumes that the best way forward may not be to eliminate algorithmic bias in the early stages, but to find ways for communities to police the decisions of computers after the fact. At Google's ReWork Conference this year, C.J. Adams described how Jigsaw, formerly Google Ideas, is finding ideas in online video game communities that have established "tribunals" that vote on whether a player's behavior violates the group's norms.

In one such case, Riot Games, creator of the wildly popular League of Legends, made some simple changes that had big effects. First, it created a group of players who vote on reported cases of harassment and decide whether a player should be suspended. Not only have incidents of bullying dramatically decreased, but players report that they previously had no idea how their online actions affected others.

Before these procedures were put in place, players who were banned for bad behavior came back and said the same horrible things again and again. At the time, players weren't told why they had been banned. Riot's new system tells players which offense caused their suspension. After the change, the behavior of players who returned to the game improved.

These models of online community policing could become one method of attacking discrimination. The combination of increased attention to biases inherent in some data, greater clarity about the properties of algorithms themselves, and the use of crowd-level monitoring may well contribute to a more equitable online world.

Many people seem to believe that decisions made by computers are inherently neutral, but when Tay screeched "race war now!!!" into the Twitterverse, it should have illustrated to everyone the threat of algorithmic prejudice. Without careful consideration of the data, the code, the coders, and how we monitor what emerges from "deep learning," our technology can be just as racist, sexist, and xenophobic as we are.

MEGAN GARCIA is a senior fellow focusing on cybersecurity at New America CA.

Some sources of bias
Design of algorithm
• Defined population
• Inputs
• Some data is weighted/more important than other data
• System constraints
• Contextual factors
Data
• Training data
• Substitute variables
Designers of the algorithm
• Machine bias is human bias
Uses of algorithm
• Alternative signals for protected categories
• Feedback loops (see the sketch below)
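
The last two items, substitute (proxy) variables and feedback loops, are the concepts most directly relevant to a tool like the Strategic Subject List. The sketch below is a hypothetical toy model, not the actual SSL algorithm, and every number and place name in it is invented; it only shows how a score built on recorded arrests can keep sending patrols where arrests were already being recorded, so that this year's output becomes next year's input.

```python
# Toy illustration of a proxy variable plus a feedback loop in a
# predictive policing score. This is NOT the Strategic Subject List
# algorithm; the formula, numbers, and area names are invented purely
# to show the dynamic.

# Two areas with the SAME underlying offense rate, but one starts with
# more recorded arrests because it was patrolled more heavily in the past.
areas = {
    "area_a": {"true_offense_rate": 0.05, "recorded_arrests": 20},
    "area_b": {"true_offense_rate": 0.05, "recorded_arrests": 40},
}
TOTAL_PATROLS = 100
ENCOUNTERS_PER_PATROL = 20  # how much activity each patrol can observe

for year in range(1, 6):
    total = sum(a["recorded_arrests"] for a in areas.values())
    for name, a in areas.items():
        # Recorded arrests act as a proxy for "risk" ...
        risk_score = a["recorded_arrests"] / total
        patrols = TOTAL_PATROLS * risk_score
        # ... and patrols determine how much offending gets recorded,
        # so this year's output becomes next year's input.
        a["recorded_arrests"] = round(
            patrols * ENCOUNTERS_PER_PATROL * a["true_offense_rate"]
        )
    print(year, {name: a["recorded_arrests"] for name, a in areas.items()})
```

Both areas offend at the same underlying rate, yet the second one permanently looks about twice as "risky" because the proxy (recorded arrests) measures where police looked, not what people did. Nothing in the loop references race or any protected category, which is how "alternative signals for protected categories" can operate in practice.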
