Why should we focus on AI fairness for women?

The story started with a conversation with my girlfriend last week. She is interested in medical research, and she told me that strokes in women are often missed: false negatives are more common among women because early stroke research was conducted mainly on male cohorts. As a result, the symptoms seen in women, which often differ from those observed in men, can go unrecognized in the clinic.
The same issue has been noted in the diagnosis of skin cancer: for people with darker skin tones, the chance of an accurate diagnosis is lower.
Such examples show that biased data collection and design decisions can lead to harmful outcomes. We live in a time when AI is deployed in almost every domain, and we cannot rule out that such biases are baked into these systems. I have even seen doctors consult chatbot tools as medical assistants while writing prescriptions.
This is why examining how well AI serves different groups, such as those defined by sex or ethnicity, matters scientifically, technically, and morally. AI systems do not merely inherit existing human prejudices; they can also amplify them through their own technical structures.
In this post, I will walk through a case study drawn from my own experience.
Case Study: Thermal Comfort
Two years ago, I worked on a project to reduce a building's energy consumption while maintaining thermal comfort. This raised an important question: what counts as a comfortable temperature? In many office buildings and commercial facilities, the answer is a fixed setpoint. However, research has shown that women report dissatisfaction more often than men under the same thermal conditions (Indraganti & Humphreys, 2015). Even without a formal scientific investigation, I and several female colleagues all reported feeling cold during working hours.
We will now build a small simulation to show how gender matters in defining thermal comfort, and by extension in other real-world settings.
Simulation Setup
We simulate two occupants, a man and a woman, with different thermal preferences. The difference may seem small at first, but it becomes important in the next section, where we introduce a reinforcement learning (RL) agent to learn the "right" temperature. We will see how dissatisfied the female occupant remains when the agent is trained only on male data.
We begin by defining a simple thermal comfort model inspired by the well-known PMV (Predicted Mean Vote) index. Each temperature is assigned a comfort score defined as max(0, 1 - dist / zone), where dist is the distance from the center of the gender-specific comfort zone:
- Men: 21–23 °C (centered at 22 °C)
- Women: 23–25 °C (centered at 24 °C)

By design, the further the temperature drifts from the center of the zone, the more comfort score is lost.
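As a quick illustration, the scoring rule can be written as a one-line function (the falloff scale `zone=2.0` is an assumption of this sketch, chosen so that the score reaches zero 2 °C away from the center):

```python
def comfort_score(temp, center, zone=2.0):
    """max(0, 1 - dist / zone): full comfort at the zone center, linear falloff."""
    dist = abs(temp - center)
    return max(0.0, 1.0 - dist / zone)

comfort_score(22, center=22)  # → 1.0 (male optimum)
comfort_score(22, center=24)  # → 0.0 (same temperature, female occupant)
```

Note that under this model a male-optimal 22 °C scores 0.0 for an occupant centered at 24 °C, which is exactly the gap the experiment exposes.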
Next, we simulate the simplest possible room: a single space in which an agent controls the temperature. There are three possible actions:
- Decrease the temperature by 1 °C
- Keep the temperature unchanged
- Increase the temperature by 1 °C
The environment updates the temperature accordingly and returns a reward based on comfort. The agent's goal is to maximize this reward over time, learning a policy that settles on a comfortable temperature.
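As a minimal sketch, an environment with this interface might look as follows (the class name and its attributes match how it is used in the training code later on; the 15–30 °C range and the 2 °C falloff scale are assumptions):

```python
import random

import numpy as np

class TempControlEnv:
    """Single-room environment: the state is the current temperature in °C."""

    # Gender-specific comfort zones as (center, falloff scale); the 2.0 °C
    # scale is an assumption of this sketch.
    COMFORT = {'male': (22.0, 2.0), 'female': (24.0, 2.0)}

    def __init__(self, sex='male', min_temp=15, max_temp=30):
        self.sex = sex
        self.min_temp = min_temp
        self.max_temp = max_temp
        # Integer temperatures as states; actions adjust the setpoint by -1/0/+1 °C
        self.state_space = list(range(min_temp, max_temp + 1))
        self.action_space = [-1, 0, 1]
        self.temp = None

    def comfort_score(self, temp):
        # max(0, 1 - dist / zone): full comfort at the center, linear falloff
        center, zone = self.COMFORT[self.sex]
        return max(0.0, 1.0 - abs(temp - center) / zone)

    def reset(self):
        # Start each episode at a random temperature
        self.temp = random.choice(self.state_space)
        return self.temp

    def step(self, action):
        # Apply the action, clamp to the allowed range, reward by comfort
        self.temp = int(np.clip(self.temp + action, self.min_temp, self.max_temp))
        return self.temp, self.comfort_score(self.temp), False
```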
RL Agent: Q-Learning
We use a Q-learning agent that interacts with the environment. It learns a good policy by updating a Q-table that stores the expected reward for each state-action pair. The agent balances exploration, that is, trying random actions, with exploitation, that is, choosing the best-known action, as it learns a temperature-control strategy.
```python
import random

import numpy as np

class QLearningAgent:
    def __init__(self, state_space, action_space, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.states = state_space
        self.actions = action_space
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration rate
        # Initialize Q-table with zeros: states x actions
        self.q_table = np.zeros((len(state_space), len(action_space)))

    def choose_action(self, state):
        # Epsilon-greedy: explore with probability epsilon, otherwise exploit
        if random.random() < self.epsilon:
            return random.choice(range(len(self.actions)))
        else:
            return np.argmax(self.q_table[state])

    def learn(self, state, action, reward, next_state):
        # Standard Q-learning update toward the bootstrapped target
        predict = self.q_table[state, action]
        target = reward + self.gamma * np.max(self.q_table[next_state])
        self.q_table[state, action] += self.alpha * (target - predict)
```
The Q-table is updated as the agent either picks the best-known action for the current state or acts randomly. The epsilon parameter, here 0.2, controls this exploration-exploitation trade-off: it is the probability of taking a random action.
Biased Training and Testing
As promised earlier, we train the agent using only the male occupant's data. We let the agent interact with the environment for 1,000 episodes of 20 steps each, so it gradually learns which temperatures yield higher rewards for men.
```python
def train_agent(episodes=1000):
    env = TempControlEnv(sex='male')
    agent = QLearningAgent(state_space=env.state_space, action_space=env.action_space)
    rewards = []
    for ep in range(episodes):
        state = env.reset()
        total_reward = 0
        for step in range(20):
            # States are temperatures in °C; shift by min_temp to index the Q-table
            action_idx = agent.choose_action(state - env.min_temp)
            action = env.action_space[action_idx]
            next_state, reward, done = env.step(action)
            agent.learn(state - env.min_temp, action_idx, reward, next_state - env.min_temp)
            state = next_state
            total_reward += reward
        rewards.append(total_reward)
    return agent, rewards
```
The code follows the standard Q-learning training loop. The learning curve is shown below.
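For reference, a smoothed learning curve can be drawn from the returned `rewards` list; this small helper and its 50-episode smoothing window are illustrative additions rather than part of the original code:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_learning_curve(rewards, window=50):
    # Moving average to smooth out the noise from epsilon-greedy exploration
    smoothed = np.convolve(rewards, np.ones(window) / window, mode='valid')
    plt.plot(smoothed)
    plt.xlabel('Episode')
    plt.ylabel(f'Total reward (moving average, window={window})')
    plt.title('Q-learning on male comfort data')
    plt.show()
```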

We can now examine how the male-trained agent performs when it is placed in the female comfort setting. The test uses the same environment; only the comfort model is changed to reflect the female preferences.
Results

The test gives the following results:

In the male setting, the agent achieved an average total reward of 16.08 per episode. It successfully learned to keep the temperature within the male comfort zone (21–23 °C).

In the female setting, the agent's average reward dropped to 0.24 per episode. The policy trained on male data does not generalize to the female comfort requirements.

In other words, a model trained on only one group may perform poorly when applied to another, even when the difference between the groups seems small.
Conclusion
This is a deliberately small and simple example. But it illustrates a serious problem: when AI models are trained on data from only one or a few groups, they risk failing to meet the needs of everyone else, even when the differences between groups appear minor. Watching an agent trained solely on male data fail to keep women comfortable shows that bias in the training data translates directly into biased outcomes.
And this reaches well beyond office temperature: the same pattern shows up in domains such as healthcare, finance, and education.
For readers, this means asking how the AI systems that surround us are designed and whether fairness was considered in their construction. It also means recognizing the limitations of "one-size-fits-all" solutions and supporting approaches that account for diverse experiences and needs. Only then can AI truly work for everyone equally.
Still, I often feel that empathy is in short supply in our society. Differences in race, gender, wealth, and culture make it hard for most of us to put ourselves in others' shoes. AI, a system driven by data, can not only inherit existing human prejudices but also amplify them. Groups that are already marginalized may be underserved or, worse, pushed further aside.


