Ritz job applicant informed of afro hair ban says hotel apology ‘disingenuous’

Hotel claimed Jerelle Jules was sent out-of-date and incorrect grooming policy banning ‘unusual hairstyles’

A black job applicant who was told his hair was against the employee grooming policy of the Ritz has said an apology he subsequently received from the hotel was “disingenuous and lacklustre”.

Jerelle Jules, 30, from Hammersmith, West London, had made it to the final round of interviews for a position as a dining reservations supervisor at the exclusive London hotel, when he was sent the company’s employee grooming policy.

The policy, dated 2021, said “unusual hairstyles” such as “spiky hair, afro style” were not allowed. Jules said he was “shocked and disappointed” that his hairstyle was not suitable.

In a statement, the Ritz said Jules was sent an out-of-date and incorrect grooming policy, adding that it had offered an “unreserved apology for this error”. But according to Jules, the apology was “disingenuous and lacklustre”.

Jules, who works in corporate housing, said it was the first time he had been told he could not have afro hair for a job. He added that he declined the final interview, and that the grooming policy was an example of “corporate ignorance”.

“I want to make sure that things like this don’t happen again,” he said. “It’s about inclusivity and black professionalism.”

Speaking to Metro, Jules added: “The word afro itself is obviously indicative of Africans and after reading that I don’t feel comfortable going to the interview.

“The policy was updated in June 2021 so this is not something that was written 10 or 20 years ago. It’s a recent policy that shows a lack of awareness about being inclusive to people of colour.”

Jules said he had invited the Ritz to talk about diversity and being “open to all candidates”.

Speaking to the BBC, Andy Slaughter, the Labour MP for Hammersmith, said the hair policy was “blatant discrimination”.

“The response by the Ritz on being challenged is wholly inadequate,” he said. “They have not explained how this racist and demeaning policy came about or what they now intend to do to address its legacy.”

“Mr Jules has offered to help them improve their recruitment process, which is a generous offer and one they should take up. There is no room for this type of attitude from employers.”

A spokesperson for the five-star hotel said: “The Ritz London does not condone discrimination of any form and we are genuinely committed to fostering an inclusive and non-discriminatory environment for all of our colleagues and guests.”


15 steps to reduce unconscious bias in the hiring process

In today’s diverse and multicultural workforce, it’s essential for companies to prioritize diversity, equity and inclusion in their hiring process. However, despite a company’s best intentions, unconscious bias can still seep into recruitment practices, leading to a lack of diversity in the workplace.

To create a truly inclusive and diverse workforce, it’s crucial to identify and try to eliminate any unconscious bias that exists in the hiring process. Below, 15 Forbes Coaches Council members share strategies companies can implement to mitigate the risk of unconscious bias harming their recruiting efforts and ensure that they’re hiring the best candidates, regardless of their background.

1. Get Feedback From Candidates

As humans, we must accept that unconscious bias exists. To keep it from impacting the hiring process, calibrate the process with a diverse panel of internal key stakeholders and candidates. Enable the panel with regular training and cultural exposure opportunities. And finally, incorporate a feedback loop into the candidate experience to enhance the process. - Anthony Howard, HR Certified LLC

2. Compare Candidates To Responsibilities

For years, recruiters said experience enables them to hire in three seconds. The key to unbiased hiring is to prepare for a 360-degree assessment with a job description, personality tests and two interviews with two people. Always exclusively compare the candidate to the role’s responsibilities. Never compare candidates to the person who is leaving or to other candidates. There’s no better or worse; there’s a “fit” or “not a fit.” - Krumma Jónsdóttir, Positive Performances

3. Show Vulnerability To Encourage Self-Reflection

Unconscious bias will always be with us, regrettably. Homo sapiens are consistently off the mark; it’s our nature. The management mechanism is self-awareness. Leaders must show vulnerability (but not too much) about their weaknesses. Tell stories about how and when you were wrong. This will go a long way toward fostering a culture of self-reflection, which is the starting point for sounder judgment. - John Evans, Evans&Evans Consulting

4. Avoid ‘Selection By Impulse’

Eliminate the bias of expedience. We tend to think that our first opinion must be true. Once we have a first reaction about a candidate, our brain wants all of our subsequent reactions to support our initial reaction. To prevent this, require hiring teams to make lists of pros and cons individually, then share their lists with each other to overcome “selection by impulse.” - Sheri Nasim, Center for Executive Excellence

5. Identify And Label The Bias

To remove unconscious bias from your hiring process, you must label it. You shine the light on it by stating what it is, how it reveals itself, what consequences follow and how exactly you can avoid it in the future. As a company, you must do it frequently, considering how common it is for biases’ influence on business outcomes to be underestimated. - Alla Adam, Alla Adam Coaching

6. Challenge Cultural Uniformity

Unconscious biases are inherent in corporate cultures. By definition, they are unconscious, and therefore, hardly removable! What can be done in the recruitment process is to question the type of hire you want. Challenging cultural uniformity to introduce diversity requires specific recruitment processes, including interviews by external recruiters and/or a panel of people of various backgrounds. - Catherine Tanneau, Activision Coaching Institute

7. Regularly Review Your Hiring Process

By establishing a culture of continuous improvement, companies can put in place regular review mechanisms—on a quarterly basis, for example—for their hiring processes to identify areas where biases may be present, and then take action to address them. This may involve analyzing candidate demographics, monitoring hiring outcomes and seeking employee and candidate feedback. - Andre Shojaie, HumanLearn

8. Communicate The Importance Of Objectivity

A hiring process that prioritizes skill sets and values that align with the purpose of the role will reduce unconscious bias over the long term. Start with communicating the importance of objectivity to the hiring team. Standardize interview questions to reduce bias, and allow candidates to be evaluated on the same criteria. Conduct team debriefs to reflect on the outcomes and fair hiring practices. - Priya Kartik, Enspire Academy

9. Evaluate Who Represents Your Company In Interviews

One step companies can take is to evaluate who represents the company in the interview process, and how. They should include individuals with diverse backgrounds in the interview process, since they bring unique lenses with which to evaluate candidates. In this same vein, companies should train interviewers on unconscious bias, because learning to identify it is the first step in overcoming it. - Savannah Rayat, Rayat Leadership Coaching

10. Bring In Unusual Voices For Real Dialogue

Bring different voices into the process, including unusual voices, and add observers. Be serious about doing the work that it takes to understand systemic bias in your culture and processes. Learn to have real dialogue and create space for it, building the capacity to hear what you don’t like to hear through feedback and observation of the facts: Do candidates express the values you say you hold true? - Alessandra Marazzi, Alessandra Marazzi GmbH

11. Standardize Your Process

Design and consistently implement a structured recruiting operations process. This will ensure that every candidate flows through the hiring process following the same steps. Everyone in the process understands the role they play, and interviewers stick to the same questions and methodology for evaluating candidates. Unconscious bias increases when these standards are missing from the hiring process. - Leang Chung, Pelora Stack

12. Remain Mindful Of Your Own Biases

As a leader, practice self-awareness and remain mindful of any biases that could be influencing your decisions during the hiring process. Remember that you have the power to shape organizational success simply by weighing each candidate objectively and making sure they get a fair chance. This requires becoming conscious of how an applicant’s skill set can impact mission objectives! - Daphne Michaels, Daphne Micheals International

13. Seek ‘Different’ Instead Of ‘Better’

Ask yourself, “How does this candidate make us different?” Using the word “different” instead of “better” helps you see the benefits of being different, instead of seeking out more of the same. You won’t automatically hire the person who makes you the most different—but you will be more open to what you couldn’t see before. - Jamie Flinchbaugh, JFlinch

14. Educate And Self-Reflect

Education and self-reflection are key! Educate HR managers on potential biases, and introduce management systems and alternative dispute resolution practices that consider the interests of all stakeholders. Hold company training courses on the seven types of biases. Include self-reflection in the education process. Have managers reflect on their propensity for bias and discuss it with their supervisors. - Karina Ochis, Prof. Dr. Karina Ochis

15. Commit To A Diverse Slate Of Candidates

The basic first hurdle is to commit to a diverse slate of candidates. My husband is a marvelous executive recruiter; one of his biggest challenges is hiring managers saying, “Bring me diverse candidates,” and then not interviewing even one diverse candidate. First, know that bias is selection through a traditional lens. Demand a diverse slate. Interview a diverse slate. Commit to diversity on your team. Just do it! - Jodie Charlop, Exceleration Partners



Business lobby tries to weaken law regulating bias in hiring algorithms

As AI hiring tools get more popular, they've attracted scrutiny for potential biases—and lawsuits claiming they can discriminate on the basis of age, gender, and race.

A law requiring employers in NYC to audit hiring algorithms for bias, among the first of its kind in the country, has already been watered down from its original iteration, and some advocates who pushed for it are worried that it will be further diluted by business interests.

There has been growing scrutiny in recent years of the use of AI hiring tools and their potential for discrimination. Among the tools facing scrutiny are video tools that evaluate candidates’ facial expressions, gamified hiring tools, and screening software that provides recommendations based on resume data. 

The U.S. Equal Employment Opportunity Commission began publishing guidance on the use of such tools in 2021, with a particular focus on the many ways that such tools can violate the Americans with Disabilities Act. The EEOC released a draft enforcement plan in January that outlined how it would regulate discrimination in AI hiring tools, and sued the company iTutorGroup for using tools that excluded older candidates.

Other lawsuits have been popping up as well, including a lawsuit in California filed by a job candidate who alleges that Workday’s hiring software discriminated against him for being Black, disabled and over 40.

In NYC, Local Law 144 was passed in November 2021 with an intended start date of January 1, 2023. But confusion and disagreements over how the law would work and who would enforce it led the city to delay implementation to April 15, 2023, while it worked out the details and took input from the public. The city’s Department of Consumer and Worker Protection released the most recent version of the rules earlier this year, ahead of a January 23 hearing. The law prohibits employers from using an “automated employment decision tool” unless it has been subjected to a bias audit no more than a year before its use and the results of that audit are made public.

The rules clarify that an audit should compare the selection rates of candidates according to gender and race, providing a chart with an example of how tools should be evaluated. Advocates for more stringent requirements, as well as lobbyists for employers, seem to agree that this version of the rules is not quite workable, but for different reasons. The city is still working with advocates, experts and business lobbyists to hammer out the final rules for April.
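To make the mechanics concrete, here is a minimal sketch in Python of the kind of selection-rate comparison the draft rules describe. Everything in it is an assumption for illustration: the group names and counts are invented, and the 0.8 threshold borrows the EEOC’s long-standing four-fifths rule of thumb rather than anything Local Law 144 itself mandates.

```python
# Hypothetical outcomes of an automated screening tool, by demographic group.
# All numbers are invented; a real audit would use the employer's actual data.
outcomes = {
    # group: (candidates screened, candidates advanced)
    "Group A": (400, 120),
    "Group B": (300, 60),
    "Group C": (250, 70),
}

# Selection rate = advanced / screened, computed per group.
rates = {group: advanced / screened
         for group, (screened, advanced) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # impact ratio relative to the most-selected group
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} ({flag})")
```

On these invented counts, Group B’s 20.0% selection rate yields an impact ratio of 0.67 against Group A’s 30.0%, exactly the kind of disparity an audit would surface for review.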

Advocates for workers and experts on AI bias want the final rules to address other categories like disability, to broaden the scope of the law so that it does not pertain only to job screening tools, and to close loopholes that would allow employers to wriggle out of a bias audit by suggesting that the tools are augmented by the judgment of human hiring managers. They also want clearer guidelines on the data that employers use to evaluate bias.

Business lobbyists, on the other hand, want the law mostly gutted. In emails sent to supporters of the legislation, they argue that an amendment should be passed to remove any requirement for a bias audit, on the grounds that there is no widely accepted way to perform such an audit accurately. They are also concerned that, since race is a voluntary category on employment forms, audits would only be testing the narrow selection of candidates who have offered this information willingly.

In a November 2022 email, Kathryn Wylde, CEO of the Partnership for New York City, a lobby for large employers seeking to influence civic policy whose membership includes IBM, Citi, Morgan Stanley, American Express, Pfizer and more, wrote a plea asking supporters of the legislation to back an amendment removing any requirement for a bias audit. The group had already presented the amendments to city council members and was now asking proponents of the legislation to back them because “the city council is looking for some indication that the groups that advocated for its passage are ok with the amendments,” according to the email.

In addition to the lack of race data in resumes, Wylde wrote in the email to the bill’s supporters that “there are no standards established for such an audit,” so the requirement should be removed. “Would you be willing to help with amendments that respect the purpose of the law - to eliminate bias in hiring - but do not mandate actions that employers can not fulfill?” Wylde asked. The email was accompanied by a one-page document prepared by PFNYC which said “employers have concluded that amendments to LL144 are necessary to ensure that they can continue to use AI to help eliminate unconscious bias.”

PFNYC argued further in the document that all of the requirements of the law are already covered by the federal Equal Employment Opportunity Commission’s guidelines, so it would only add impossible-to-enforce requirements for AI tools, which the lobby says limit bias. (There is, of course, no evidence that algorithmic or AI job screening tools are limiting bias, and ongoing lawsuits seem to suggest the contrary.) PFNYC also proposed removing a requirement that employers provide notice to candidates about the use of AI tools 10 days ahead of time, on the basis that “This will impose delay and significant hardship on both many employers and job applicants.”

When reached for comment, Wylde told Motherboard that the email represents PFNYC’s current position on the legislation. “The legislation was enacted without employer input,” she said. “We agree that there needs to be a way to ensure that AI tools are unbiased. However, there are no accepted standards for an AI bias audit, they need to be developed. We don’t oppose the concept, but right now no one knows what would be involved in compliance. It is not good to pass laws where regulators have no clear guidelines for enforcement and employers have no clear way to comply,” she said, adding that there should be a process to develop “reasonable standards” before the law is implemented.

Ridhi Shetty, policy counsel with the Center for Democracy & Technology, said that some of the limitations on race and ethnicity data and the lack of consensus on how to conduct bias audits should not preclude the use of bias audits.

“I think there are certainly approaches to take now,” Shetty said, pointing out that the Center for Democracy & Technology published a set of standards for evaluating bias in hiring tools in December. Those standards emphasize audits of all automated decision-making tools used by employers, including those determining promotions and targeted job advertising. The suggested standards require employers to look at protected categories beyond race and sex, including disability, and generally advocate for a more system-wide evaluation of a tool’s inputs and goals rather than just a numerical score.

Shetty said that many of the widely used rules for determining bias in hiring are outdated, having been created decades ago. “It is true that it's going to be hard to standardize an approach to audit for bias in a way that's going to work for all employers and for all kinds of tools and for all kinds of discrimination,” Shetty said. But this indicates that audits should move away from quantitative examinations of selection rates by race and look more at what the tools are designed to do and whether those goals could have disparate impacts, according to Shetty.

“If you're looking for certain kinds of personality traits… it's important to scrutinize why those are the criteria that you consider to be related to the job,” Shetty said.

An audit that’s qualitative wouldn’t necessarily require voluntary data on race, Shetty believes. “I would argue you don't really need people to be sharing their race or their disability or any other protected characteristic during the application process for you to be able to examine a tool or its potential impact.”

Lobbyists for large companies are still pushing to remove audits altogether. But the law has already been watered down from its original iteration in 2020. When it was introduced by then Council members Laurie Cumbo and Alicka Ampry-Samuel, the original bill sought to address bias in hiring tools earlier in the process by putting the onus on software developers to test for bias before selling their tools to employers. The earlier bill also required bias audits to screen for all categories covered by the city’s human rights laws and the U.S. Equal Employment Opportunity Commission, which include disability and age.

After over a year of inaction, a revised bill was introduced on November 9, 2021 and passed almost immediately by the city council on November 10, 2021, at the behest of then-mayor Bill de Blasio. This version put all the onus on employers to make sure their tools are screened before use. While employers should share responsibility, it potentially leaves vendors of screening software off the hook, according to Shetty.

“When you see vendors increasingly performing the functions of what has been defined as an employment agency, it becomes trickier to hold them accountable,” Shetty told Motherboard.

The law as enacted requires screening for bias by race, ethnicity and sex, categories for which employers are already technically required to screen hiring tools, and leaves out other protected categories. The city first released draft rules in May 2022 and has since held three public hearings, the latest on January 23, with a plan to release new rules prior to implementation on April 15.

Shetty has no problem with the delayed timeframe for implementation as long as the city takes input from stakeholders seriously. She believes taking time to work out the details is important, particularly since the current version of the law was rushed to vote in 2021. And while it doesn’t include any explicit requirement to audit bias in protected categories like age or disability, it could still be a powerful tool to address discrimination.

“It would be really useful to make sure that even if this law only focuses on race-based or gender-based discrimination, that it at least does that to the fullest extent possible,” Shetty said.


The why and the how of diversity in tech

Equitable opportunity is key to a thriving economy. Nashlie Sephus, Ph.D., is a rising star in AI software development, and her groundbreaking work on mentorship sheds light on diversity in tech.

As the leader of Amazon/A9's visual search tech team, she has paved the way for underrepresented communities in the industry with her ideas and achievements.

Watch her TED Talk about how and why we are losing tech talent from an early age.

ChatGPT: New AI system, old bias?

Here's how this powerful tech can become more accurate and inclusive.

Every time a new application of AI is announced, I feel a short-lived rush of excitement — followed soon after by a knot in my stomach. This is because I know the technology, more often than not, hasn't been designed with equity in mind. 

One system, ChatGPT, has reached 100 million unique users just two months after its launch. The text-based tool engages users in interactive, friendly, AI-generated exchanges with a chatbot that has been developed to speak authoritatively on any subject it's prompted to address.

In an interview with Michael Barbaro on The Daily podcast from the New York Times, tech reporter Kevin Roose described how an app similar to ChatGPT, Bing's AI chatbot, which is also built on OpenAI's GPT-3 language model, responded to his request for a suggestion on a side dish to accompany French onion soup for Valentine's Day dinner with his wife. Not only did Bing answer the question with a salad recommendation, it also told him where to find the ingredients in the supermarket and the quantities needed to make the recipe for two, and it ended the exchange with a note wishing him and his wife a wonderful Valentine's Day — even adding a heart emoji.

The precision, specificity, and even charm of this exchange speak to the accuracy and depth of knowledge needed to drive the technology. Who would not believe a bot like this?

Bing delivered this information by analyzing keywords in Roose's prompt — especially "French onion soup" and "side" — and generating the response its model scored as most likely to answer his query. Those responses come from large language models developed by engineers at OpenAI, which are trained to predict likely answers to user prompts.

In 2020, members of the OpenAI team published an academic paper stating that their language model was the largest ever created, with 175 billion parameters behind its functionality. Having such a large language model should mean ChatGPT can talk about anything, right?

Unfortunately, that's not true. A model this size needs inputs from people across the globe, but it will inherently reflect the biases of those writers. This means the contributions of women, children, and other people marginalized throughout the course of human history will be underrepresented, and this bias will be reflected in ChatGPT's functionality.

AI bias, Bessie, and Beyoncé: Could ChatGPT erase a legacy of Black excellence? 

Earlier this year, I was a guest on the Karen Hunter Show, and she referenced how, at that time, ChatGPT could not respond to her specific inquiry, whether artist Bessie Smith influenced gospel singer Mahalia Jackson, without additional prompting that introduced new information.

While the bot could provide biographical information on each woman, it could not reliably discuss the relationship between the two. This is a travesty, because Bessie Smith is one of the most important blues singers in American history: she not only influenced Jackson but is credited by musicologists with laying the foundation for popular music in the United States. She is said to have influenced hundreds of artists, including the likes of Elvis Presley, Billie Holiday, and Janis Joplin. However, ChatGPT still could not provide this context for Smith's influence.

This is because one of the ways racism and sexism manifest in American society is through the erasure of the contributions Black women have made. For musicologists to write widely about Smith's influence, they would have to acknowledge that she had the power to shape the behavior of white people and culture at large. This challenges what author and social activist bell hooks called the "white supremacist, capitalist, patriarchal" values that have shaped the United States.

Smith's contributions are therefore minimized. As a result, when engineers at OpenAI were training the ChatGPT model, it appears they had limited access to information on Smith's influence on contemporary American music. This became clear in ChatGPT's inability to give Hunter an adequate response, and that failure reinforces the minimization of Black women's contributions as a music industry norm.

In a more contemporary example exploring the potential influence of bias, consider the fact that, despite being the most celebrated Grammy winner in history, Beyoncé has never won for Record of the Year. Why? 

One Grammy voter, identified by Variety as a "music business veteran in his 70s," said he did not vote for Beyoncé's Renaissance as Record of the Year because the fanfare surrounding its release was "too portentous." The impact of this opinion, unrelated to the quality of the album itself, contributed to the artist continuing to go without Record of the Year recognition. 

Looking to the future from a technical perspective, imagine engineers developing a training dataset for the most successful music artists of the early 21st century. If status as a Record of the Year Grammy award winner is weighted as an important factor, Beyoncé might not appear in this dataset, which is ludicrous. 
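Here is a hedged sketch, in Python, of how that dataset-construction pitfall plays out; the artist records and the gating criterion are entirely invented for illustration:

```python
# Invented records for two hypothetical artists. If inclusion in a
# "most successful artists" dataset is gated on a single proxy -- a
# Record of the Year win -- the most-awarded artist never appears.
artists = [
    {"name": "Artist X", "grammy_wins": 5, "record_of_the_year": True},
    {"name": "Artist Y", "grammy_wins": 32, "record_of_the_year": False},
]

dataset = [a for a in artists if a["record_of_the_year"]]
print([a["name"] for a in dataset])  # ['Artist X'] -- Artist Y is erased
```

Every model trained downstream of that dataset inherits the omission, which is how a single skewed proxy propagates into supposedly objective results.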

Underestimated in society, underestimated in AI 

Oversights of this nature infuriate me because new technological developments are purportedly advancing our society — they are, if you are a middle-class, cisgender, heterosexual white man. However, if you are a Black woman, these applications reinforce Malcolm X's assertion that Black women are the most disrespected people in America.

This devaluation of the contributions Black women make to wider society impacts how I am perceived in the tech industry. For context, I am widely considered an expert on the racial impacts of advanced technical systems, regularly asked to join advisory boards and support product teams across the tech industry. In each of these venues I have been in meetings during which people are surprised at my expertise. 

This is despite the fact that I lead a team that endorsed and recommended the Algorithmic Accountability Act to the U.S. House of Representatives in 2019 and again in 2022, and that the language it includes around impact assessments has been adopted by the 2022 American Data Privacy and Protection Act. Despite the fact that I lead a nonprofit organization that has been asked to help shape the United Nations' thinking on algorithmic bias. And despite the fact that I have held fellowships at Harvard, Stanford, and the University of Notre Dame, where I considered these issues.

Despite this wealth of experience, my presence is met with surprise, because Black women are still seen as diversity hires and unqualified for leadership roles.

ChatGPT's inability to recognize the impact of racialized sexism may not be a concern for some. However, it becomes a matter of concern for us all when we consider Microsoft's plans to integrate ChatGPT into our online search experience through Bing. Many rely on search engines to deliver accurate, objective, unbiased information, but that is impossible — not just because of bias in the training data, but also because the algorithms that drive ChatGPT are designed to predict rather than fact-check information.
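To make "predict rather than fact-check" concrete, here is a toy sketch in Python; the prompt and the probabilities are invented, and a real model samples from a vast vocabulary rather than three names:

```python
import random

# Invented next-token distribution for the prompt:
# "The foundation of American popular music was laid by ..."
# Names that appear often in the training text get high probability,
# regardless of what music historians would actually say.
next_token_probs = {
    "Elvis Presley": 0.55,  # heavily represented in the training data
    "The Beatles": 0.40,
    "Bessie Smith": 0.05,   # underrepresented, so rarely generated
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# The model samples a statistically likely continuation; no step here
# checks whether the resulting sentence is true.
print(random.choices(tokens, weights=weights, k=1)[0])
```

Nothing in that sampling step consults a source of truth: likelihood under the training data stands in for accuracy.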

This has already led to some notable mistakes.

It all raises the question: Why use ChatGPT?

The stakes in this movie mishap are low, but consider the fact that a judge in Colombia has already used ChatGPT in a ruling — a major area of concern for Black people.

We have already seen how the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm in use in the United States has predicted Black defendants would reoffend at higher rates than their white counterparts. Imagine a ruling written by ChatGPT using arrest data from New York City's "Stop and Frisk" era, when 90 percent of the Black and brown men stopped by law enforcement were innocent.
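A brief sketch of the proxy-label problem that makes this scenario so alarming; the neighborhood names and stop counts below are invented, and only the 90 percent innocence figure comes from the reporting above:

```python
# If a risk model's training label is "was stopped or arrested" rather
# than "committed an offense," it learns policing patterns, not behavior.
# Stop counts are invented for illustration.
stops_per_1000 = {
    "Heavily patrolled neighborhood": 120,
    "Lightly patrolled neighborhood": 10,
}
INNOCENT_SHARE = 0.90  # reported share of stops that found no wrongdoing

for area, stops in stops_per_1000.items():
    label_rate = stops / 1000  # what the model is actually taught to predict
    print(f"{area}: training-label rate {label_rate:.1%}, "
          f"~{INNOCENT_SHARE:.0%} of those stops involved no offense")
```

A 12-to-1 gap in the training labels becomes a 12-to-1 gap in predicted "risk," even if underlying behavior were identical across neighborhoods.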

Seizing an opportunity for inclusion in AI

If we acknowledge the existence and significance of these issues, remedying the omission of voices of Black women and other marginalized groups is within reach. 

For example, developers can identify and address training data deficiencies by contracting third-party validators, or independent experts, to conduct impact assessments on how the technology will be used by people from historically marginalized groups. 

Releasing new technologies in beta to trusted users, as OpenAI has done, also could improve representation — if the pool of "trusted users" is inclusive, that is. 

In addition, the passage of legislation like the Algorithmic Accountability Act, which was reintroduced to Congress in 2022, would establish federal guidelines protecting the rights of U.S. citizens, including requirements for impact assessments and transparency about when and how the technologies are used, among other safeguards. 

My most sincere wish is for technological innovations to usher in new ways of thinking about society. With the rapid adoption of new resources like ChatGPT, we could quickly enter a new era of AI-supported access to knowledge. But using biased training data will project the legacy of oppression into the future.
