
Sauti Yangu From Deep Learning Indaba 2019
This article is intended for all enthusiasts of Artificial Intelligence (AI), machine learning, deep learning, data science, big data, etc. After taking part in the IndabaX Senegal organization as a member of the GalsenAI community, this year was my first attendance at the Deep Learning Indaba, held in Nairobi, Kenya. I am about to immerse you in an overwhelming week of sharing knowledge and culture in the field of deep learning.
This article is divided into two parts. In part one, I will talk about the keynotes, and part two will focus on the sessions, practicals and the hackathon.
What is Deep Learning Indaba?
This is a question that comes up all the time, and I’m always happy to answer it, because it is in this way that we can contribute together to raising Sauti Yetu (our voice). The words “Sauti Yetu” (our voice) and “Sauti Yangu” (my voice) come from Swahili, a language shared by the countries of East Africa. The use of “Sauti Yangu” (my voice) reflects the fact that this article is about my own experience at Deep Learning Indaba 2019.
Everything has a story, an origin, so first of all: what is Indaba? Indaba is a Zulu word that means gathering. The Deep Learning Indaba is an annual gathering of communities and machine learning enthusiasts in Africa. It aims to strengthen the African machine learning community through contributions from researchers, professionals and students in the field. It is a week of sharing, learning, research and debate on the state of the art of machine learning and artificial intelligence. This year, the event was held from August 25th to 30th, 2019 at Kenyatta University, and I thank the members for the warm welcome they gave us. The event welcomed about 700 participants from more than 27 different nationalities, a fantastic mix of cultures.
Now let’s talk about Keynotes.
Keynotes:
This year, #DLIndaba2019 hosted several keynotes, including:
Keynote 1: Innovations for global impact, My journey as a researcher
This keynote was delivered by Aisha Walcott-Bryant, PhD, Research Scientist and Manager of AI Science and Engineering (Water, AI, Healthcare) at IBM Research Africa, Kenya.
Leveraging AI for innovative solutions in Africa was the general idea of her presentation. Indeed, with more than 1 billion people, more than 2,000 languages and over 3,000 ethnic groups, “the problems encountered in the context of Africa are rich and complex, and the solutions we create here will benefit the world.” She raised issues that could open several avenues for reflection:
Who else will develop drug discovery algorithms that include the comorbidities of Africa?
Who else will understand the link between water and agriculture to enhance food security and end hunger?
Who else will create learning technologies to help students code-switch between local and national languages?
The questions above may inspire your researcher’s or entrepreneur’s mind to think about creating an appropriate solution.
The IBM Africa Research Scientist gave three examples of innovative solutions:
- Care coordination with a case study in Kenya on chronic disease management.
- Malaria intervention and policy planning using Machine Learning to target the most affected countries.
- Addressing the challenges of mobility and traffic in developing cities (case study in Nairobi, Kenya)
These solutions are the result of a well-defined work process and the commitment of all stakeholders, involving:
- Immersion into the problem domain
- Design Thinking
- Academic collaborations
- Cross-lab collaborations
- Ecosystem engagement
- Minimum viable products (MVPs)
- Pilots and studies, etc…
Keynote 2: Leveraging Bioinformatics and AI to deliver more precision
Abdoulaye Baniré Diallo, PhD, Co-Founder and Scientific Director of MIMS, presented his work on how to use bioinformatics and AI to deliver more precision in health and food production. For those who do not know, bioinformatics is a science that combines biology, statistics and computer science. The image below illustrates the synergy that could be created between biologists, data scientists and researchers in the medical field.

Since 2006, we have seen exponential growth in genetic data, which comes in different forms. Faced with this, the challenges for AI in the life sciences are significant:
- Running time
- Algorithm accuracies (including AI)
- Handling massive data
- Intensive Computation
- Multiple complex models
- Tracking knowledge and science quality
- Dealing with heterogeneous and/or partial data
In this sense, the important areas where AI can have an impact are:
- Acute Care Predictive Analytics
- Intensive Ambulatory Care Analytics
- Population Health
- Aging In Place Analytics
- Genomics / Oncology
- Radiographic Image Evaluation
- Physician Workflow
- IoT Use / Optimization
- Image Integration (Different sources)
- Tissue Evaluation (Quantitative Morphology)
To deploy a machine learning solution in the health field, you need a well-defined roadmap. The image below shows an example to follow.

Added to this are the challenges of data volume, temporality, domain complexity and interpretation. AI is still nascent in the field of health, and there are opportunities to be grasped, such as: enrichment of functionalities, federated inference, confidentiality of models, integration of specialized knowledge, temporal modeling and interpretable modeling.
Finally, AI can be applied to the prediction of disease risk, personalized prescriptions, treatment recommendation, etc.
Keynote 3: Beyond Buzzwords (Innovation, Inequity, and Imagination in the 21st Century)
Ruha Benjamin took us beyond the buzzwords (AI, deep learning). This keynote, in my opinion, cannot be described perfectly; it was simply great to attend. It is true that we are talking about deep learning, AI and machine learning, but are we not recreating our own society, with all its faults, in these algorithms? People’s thirst for power, always wanting to trample on their neighbor.
“In 1863, Abe Lincoln freed the slaves, but in 1965, slavery will return! We will all have personal slaves again… Do not worry, we are talking about ‘slave’ robots.” Isn’t this vision repeating itself? Beyond diversity, algorithms encode judgments. Would the discriminatory designs of that era not reappear in our algorithms? “True compassion is more than flinging a coin to a beggar; it comes to see that an edifice which produces beggars needs restructuring” (MLK).
What we can take into account is that:
- racism is productive; it constructs
- race & technology are co-produced
- imagination is a field of action
Here are two of Ruha’s books that will give you a clearer understanding of the problem (see image).

She gave the example of an application in Kenya that lends you money when you need it. The poor were the first to benefit from this digital loan application, and now they call it slavery. As one Nairobi researcher put it, these apps “give you money gently, then they come for your neck.” Algorithms discriminate by default. It is our fault that an AI thinks the names of white people are more pleasant than those of black people. Widely used language processing algorithms, trained on human writing from the Internet, reproduce human prejudices along racist and sexist lines. An article by Jordan Weissmann in October 2018 reported that Amazon had created an AI-based recruitment tool, and that it began to discriminate directly against women. There is a technological responsibility that we must adopt in order not to reproduce history. We are witnessing the construction of communities like “Deep Learning Indaba”, “Data for Black Lives”, “Detroit Community Technology Project” and “Digital Defense Playbook”. The final proposition is that if inequity is woven into the very fabric of society, then each twist, coil and code is a chance for us to weave new patterns, practices and politics. Its vastness will be its undoing once we accept that we are “pattern makers”.
Keynote 4: Seeking Research Impact at the Grassroots – The Role of AI
Dr. Ciira wa Maina from Dedan Kimathi University of Technology honored us as our 4th keynote speaker, again looking at how AI could impact the development of our continent. This impact will come mainly through mentorship:
One of the major responsibilities of the older members of the AI community is to provide examples of important work; it is important that the students among us identify role models. He told us about a study he conducted in Kenya on a bioacoustics project, combining IoT and machine learning to address ecological problems marked by environmental degradation and climate change. All over the world, livelihoods have been greatly affected, which is a motivation to find solutions that will preserve our ecosystem. The data used for this study come from several sources:
- Acoustic data – 2700 recordings
- Traditional bird survey methods – point counts
- 300 recordings annotated by expert ornithologists from NMK
- Data available on Data Dryad
The first step was to examine the frequency of species occurrence in point counts and audio annotations. The data contain the species in our study area: 6 recordings featuring the crowned eagle in the foreground and 12 featuring Hartlaub’s turaco.
Why the need to use Machine Learning?
- the acoustic recordings generate a lot of data,
- data annotation is time-consuming and expensive.
The training data for the first model come from Xeno-canto ( http://www.xeno-canto.org ), a site dedicated to sharing recordings of bird sounds from all over the world. These 60 GB of data come from South America and were used in the BirdCLEF 2016 challenge (https://www.imageclef.org/lifeclef/2016/bird ). After obtaining the spectrograms, these are used as input to a CNN with six convolutional layers and 2 fully connected layers.
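As an illustration, here is a minimal Keras sketch of what such a spectrogram classifier could look like; the input shape, filter sizes and number of species are my own assumptions, not the architecture actually used in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SPECIES = 10            # assumed number of bird species (illustrative)
SPEC_SHAPE = (128, 256, 1)  # assumed spectrogram height x width x channels

def build_bird_cnn():
    """Small CNN: six convolutional layers followed by two fully connected layers."""
    model = models.Sequential([layers.Input(shape=SPEC_SHAPE)])
    for filters in (16, 32, 64, 64, 128, 128):  # six conv blocks
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))               # FC layer 1
    model.add(layers.Dense(NUM_SPECIES, activation="softmax"))    # FC layer 2
    return model

model = build_bird_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```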
Newer architectures include recurrent layers to account for the temporal dependency of the features.
A model pre-trained on the BirdCLEF data is then fine-tuned using the Xeno-canto data for Kenya: the results on South American species were transferred to Kenyan species using transfer learning (a minimal sketch of this fine-tuning step is shown after the list below). The prospects for this project are to improve the accuracy of the model, explore deeper architectures and improve the data processing. The keys to impact in this area of AI are, among others:
- focus on a big problem
- focus on data acquisition and familiarity
- collaboration with experts in the field
- patience
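Here is the minimal fine-tuning sketch referred to above, assuming a model like the previous one has already been pre-trained on the BirdCLEF data; the file name and the choice of which layers to freeze are hypothetical.

```python
import tensorflow as tf

# Hypothetical path to a model pre-trained on the BirdCLEF 2016 recordings.
pretrained = tf.keras.models.load_model("birdclef_pretrained.h5")

# Freeze the convolutional base and keep only the classifier head trainable,
# then fine-tune on the (much smaller) Kenyan Xeno-canto recordings.
for layer in pretrained.layers[:-2]:
    layer.trainable = False

pretrained.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

# kenya_train is assumed to be a tf.data.Dataset of (spectrogram, label) pairs.
# pretrained.fit(kenya_train, epochs=10)
```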
IoT projects in agriculture and water monitoring are in sight.
Keynote 5: The future of Multitask Learning
Today, we are witnessing a remarkable evolution of AI thanks to the computing power of computers. Richard Socher, Chief Scientist at Salesforce, provided insight into what can be expected from the future of AI. From machine learning with feature engineering to single-task architectures, the future of AI looks promising with multitask learning.

Single-task learning models have limitations such as:
- Significant performance improvements in recent years, but only for a given dataset, task, model and metric.
- We can climb to a local optimum as long as the dataset is big enough.
- For more general AI, we need continuous learning in a single model.
- Models usually start from random initialization or are only partially pre-trained.
This expectation around advances in NLP is explained by the fact that language requires many types of reasoning, requires short- and long-term memory, is divided into intermediate and disparate tasks, and seems to require a lot of supervision by nature. With multitask models, we will have a truly generalist NLP model, and we could let the models choose how to transfer knowledge, a perfect crucible to study:
- Sharing of weights and models
- Transfer learning (ideally for improved performance)
- Zero-shot learning
- Domain adaptation
One model is easier to deploy in production and makes it easier for anyone to solve their NLP problem.
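To make the idea of weight sharing concrete, here is a minimal Keras sketch of a single model with a shared encoder and several task-specific heads; the vocabulary size, tasks and layer sizes are illustrative assumptions, not the model presented in the keynote.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE, MAX_LEN = 20000, 128  # assumed vocabulary size and sequence length

# Shared encoder: its weights are reused by every task.
tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 128)(tokens)
shared = layers.Bidirectional(layers.LSTM(128))(x)

# Task-specific heads on top of the shared representation (illustrative tasks).
sentiment = layers.Dense(2, activation="softmax", name="sentiment")(shared)
topic = layers.Dense(10, activation="softmax", name="topic")(shared)

model = Model(inputs=tokens, outputs=[sentiment, topic])
model.compile(optimizer="adam",
              loss={"sentiment": "sparse_categorical_crossentropy",
                    "topic": "sparse_categorical_crossentropy"})
```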
So for the future of AI, we should expect further development of multitask learning models.
Keynote 6: Towards Improving Health Decisions with Reinforcement Learning
Presented by Finale Doshi-Velez, the theme of this keynote was decision-making under uncertainty, focused more specifically on the following question:
how can reinforcement learning help in decision making?
There are two common approaches to this type of problem:
- The model-based approach: build the model.
Build a model to solve the problem in the long run, but a realistic model remains a challenge (e.g. Ernst 2005, Parbhoo 2014, Marivate 2015).
- The neighbor-based approach: find a neighbor.
Apply kernels to predict immediate outcomes (e.g. Bogojeska 2012), but this fails if there are no close neighbors.
These approaches have complementary strengths.
- Patients with close neighbors can be better modeled by their neighbors.
- Patients without neighbors can be better modeled with a parametric model.
The approach: combine the predictors
Application: HIV management
The study used 32,960 patients from the EuResist database, of which 3,000 were held out for testing. The observations are CD4 counts, viral loads and mutations; the actions are combinations of 312 common drug cocktails (built from 20 drugs). It then only remains to feed the data into the model; the results suggest the initial hypothesis was correct, with the parametric model being relied on when a patient’s neighbors are far apart.
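To make the combined approach concrete, here is a minimal sketch, assuming scikit-learn, of gating between a neighbor-based predictor and a parametric model based on how far a patient’s nearest neighbors are; the threshold, features and models are illustrative, not the method actually used in the study.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

def fit_combined(X_train, y_train, n_neighbors=5):
    """Fit both a neighbor-based and a parametric predictor on the same data."""
    y_train = np.asarray(y_train)
    knn = NearestNeighbors(n_neighbors=n_neighbors).fit(X_train)
    parametric = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return knn, parametric, y_train

def predict_combined(x, knn, parametric, y_train, dist_threshold=1.0):
    """Use the neighbors when they are close; fall back to the parametric model."""
    dist, idx = knn.kneighbors(x.reshape(1, -1))
    if dist.mean() < dist_threshold:
        # Patient has close neighbors: predict from their observed outcomes.
        return y_train[idx[0]].mean()
    # Patient has no close neighbors: use the parametric model instead.
    return parametric.predict_proba(x.reshape(1, -1))[0, 1]
```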
Application: sepsis management
This study used a cohort of 15,415 sepsis patients from the MIMIC dataset (the same as Raghu et al., 2017), containing vital signs and laboratory tests. The actions focus on vasopressors and fluids, which are used to manage circulation. The goal of this work is to reduce 30-day mortality. To increase confidence in the results, we can check whether the learned policies are reasonable. For more details on this keynote, send me an email to request access to the drive.
Keynote 7: Some recent insights on Transfer Learning
This keynote was as technical as the previous one and was presented by Samory Kpotufe of Columbia University.
The problem is the following:
Can we quickly implement knowledge transfer in ML?
In this keynote on transfer learning, the focus was on covariate shift, that is, we train the algorithm on a population P but aim at a population Q. Example: vision software trained on a US population to be deployed in Kenya. Problems related to computer vision are everywhere, even in the USA: known race and gender biases in vision software (35% of women with darker skin misidentified by Microsoft’s vision software, 2018). This problem is found in several critical applications: AI in the justice system, medicine, genomics, the insurance industry, etc.
This problem is often simply financial: data is expensive and difficult to obtain. The obvious solution is to make it cheaper by sampling as little target data as possible. The question is: how much source data can we use?
The basic questions to ask are:
- does the source P contain enough information about the target Q?
- if not, how much new data should we collect, and how?
- would unlabeled target data be sufficient, or at least help?
What needs to be understood here is the relative advantage of source and target samples.
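As one standard way to exploit the relative value of source and target samples under covariate shift, here is a minimal sketch of importance weighting, where source samples are reweighted by an estimate of q(x)/p(x); the classifier-based density-ratio trick below is a common baseline and an assumption on my part, not necessarily what the keynote covered.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_source, X_target_unlabeled):
    """Estimate w(x) ~ q(x)/p(x) with a probabilistic source-vs-target classifier."""
    X = np.vstack([X_source, X_target_unlabeled])
    # Label 0 = drawn from the source P, label 1 = drawn from the target Q.
    z = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target_unlabeled))])
    clf = LogisticRegression(max_iter=1000).fit(X, z)
    p_target = clf.predict_proba(X_source)[:, 1]
    # The odds ratio approximates the density ratio q(x)/p(x) up to a constant.
    return p_target / np.clip(1.0 - p_target, 1e-6, None)

# Usage: fit any downstream model on (X_source, y_source) with
# sample_weight=importance_weights(X_source, X_target_unlabeled).
```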
If you want to know the technical details, you can request access to my shared folder by email.
As mentioned at the very beginning, this is part one, and we have come to the end of it. I hope you enjoyed reading it. Get ready for part two, coming soon. Clap it up and feel free to leave a comment too.
In this second part, I will talk about the sessions (special and parallel), then the practicals, and finally the hackathon.
Special sessions:
The special sessions took place on the Indaba Research Day. Twenty-one researchers presented their projects, split into two spotlight sessions.
First Spotlight session:
- Jecinta Mulongo, Anomaly Detection in Power Generating Plant using Machine Learning Techniques
- Fauste NDIKUMANA, Monitoring System to Strive against Fall Armyworm in Crops (Case Study: Maize in Rwanda)
- JEAN AMUKWATSE, Soil Mobile Tester Laboratory (SoMiT Lab)
- Edna Milgo, A Stochastic Optimization Based MCMC
- Olaniyan Oluwasegun Emmanuel, Development of Multi-Target Regression Models to Predict the Physical and Chemical Properties of Soil
- Mohamed Tarek Shaaban Dawoud, Sim-to-real Conditional End-to-End Self-Driving Vehicle Through Visual Perception
- Deborah Dormah Kanubala, Risk scoring algorithms for farmers in the smallholder setting
- Honoré Mbaya, Prototype of semantic technology for the Congo-Africa review
- Kale-ab Tessera, Learning compact, general-purpose neural network architectures
- Allan Ocholla, Algorithmic Governance: The New Normal
- Pius Nyanumba, Gaussian Process Modelling in Rotation Measure Synthesis
Second spotlight session:
- AYADI Alaeddine, Classification of product pose view using a unified Embedding with Hard Triplet Loss and Gradient Boosted Tree models
- Arnaud Nzegha, Data augmentation and 3D Reconstruction
- Mouad Riyad, Deep convolutional-recurrent neural network for SMR classification
- Ali Bosir, Automation using IOT and NLP
- Sam Masikini, Challenges in The Automatic Recognising and Counting Malawi Banknotes
- Francis Chikweto, Patient Vital signs monitoring with deep Learning integration
- Getenesh Teshome, Bidirectional Attentive Matching Based Textual Entailment Recognition using Deep Learning
- Samantha Van Der Merwe, Predicting Social Unrest in South Africa
- Elizabeth Benson, Urban Highway Traffic Routing And Prediction Model with HMMs
- Rihab Gorsane, Hybrid approach for order-based optimization using Evolutionary Algorithms: Case of Capacitated Vehicle Routing Problem
How to write a great research proposal?
Presented by Daniela Massiceti (University of Oxford), Laura Sevilla (University of Edinburgh) and George Konidaris (Brown University).
A research proposal is a description of the work you want to do during your PhD (3-5 years of research) and your motivation.
The proposed work must be:
- New: something that has never been done
- Better: more accurate or more efficient
A research proposal is used for:
- PhD applications!
- Grants: funds awarded for projects, often from government, industry, etc., usually obtained by senior researchers (like professors)
- Fellowships and scholarships: funds for students, post-docs or even professors.
77% of respondents think it is the most important, or second most important, document in a PhD application.
A PhD is a university degree that validates your research skills. The prerequisites are different according to the country:
– In the United States: a doctorate lasts five or six years, of which roughly two consist of coursework and the rest of research. A master’s degree can be an advantage, but it is not necessary. Students tend to graduate with 3 strong conference papers on which they are first authors.
– In the United Kingdom, Germany, Sweden, etc.: a doctorate lasts 3 to 4 years and courses are often optional. It is therefore useful to come with a master’s degree, although this is not necessary. The best students can also graduate with 3 papers, but this is not a requirement.
– In Africa: a doctorate is usually a three-year degree concluded by a thesis, and usually follows a master’s degree.
Examples of job opportunities:
– University professor: researches, teaches, mentors.
– Research Scientist (for example, in a company): writes papers, perhaps with a little product development.
– Research Engineer (for example, in a company): develops the engineering infrastructure needed to carry out the research.
A PhD gives you a certain flexibility of work, makes it easier to obtain a visa, and allows more independent and more creative work. The salary can easily exceed $120K per year.
Parallel sessions:
I did not write summaries for these sessions. There were 15 of them. You can look through the list below for those that interest you most and email me to request access to the presentations.
AI in Kenya (ANALYTICS AT AFRICASTALKING, Design Thinking in Data Science, Transfer Learning in Credit Scoring)
AI_Fairness (Deep Learning for Diabetic Retinal Disease Diagnosis in Zambia, Explaining Deep Learning for natural Language understanding, ICT access in Africa the age of Artificial Intelligence)
Data science in practice.
ML in Resource Constrained Environments
Machine Learning for Health
Natural Language Processing
Bayesian optimization and hyperparameter search
Deep generative models
Deep learning fundamentals
Reinforcement Learning Fundamentals
Recurrent Neural Networks
Computing, camera traps and conservation: how can machine learning inform ecology?
Measuring economic development from space
STARTUPS & INNOVATION
Introduction to Bayesian inference
Practical Work:
It’s good to read the work of others to try to understand it, but to be good at AI and data science, you have to practice. These practical sessions are, in a way, the complement or the applied part of some parallel sessions. The practicals we did during the Indaba were as follows:
- Introduction to Python
- Machine Learning Fundamentals
- Build your own TensorFlow
- Deep Nets fundamentals
- Optimization
- Conv nets
- Generative Models
- Recurrent nets
- Reinforcement Learning
Hackathon Track:
Two hackathons were organized during the week:
- Snapshot Serengeti

Snapshot Serengeti is the world’s largest camera-trap project, with 225 camera traps running continuously in Serengeti National Park, Tanzania.
There were 3 challenges to overcome:
- Species recognition challenge: obtain the best average accuracy per species (see the sketch after this list).
- Animal counting challenge: count the number of animals in each picture.
- Creative challenge: everything is allowed. Be creative!
Working groups were formed, and I participated as a member of the “HackSerengethi” group. Congratulations to the winners of the challenges.
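For the species recognition challenge, here is a small sketch of what “average accuracy per species” (a macro-averaged, per-class accuracy) might look like; the exact challenge metric may have differed, and the species labels are purely illustrative.

```python
import numpy as np

def mean_per_species_accuracy(y_true, y_pred):
    """Accuracy computed per species, then averaged over species (macro average)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracies = []
    for species in np.unique(y_true):
        mask = y_true == species
        accuracies.append((y_pred[mask] == species).mean())
    return float(np.mean(accuracies))

# Example with three illustrative species labels.
print(mean_per_species_accuracy(
    ["zebra", "zebra", "gazelle", "wildebeest"],
    ["zebra", "gazelle", "gazelle", "wildebeest"]))  # -> 0.833...
```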
The Hackathon Team (Participants and Organizers)

- Malaria: focus on Reinforcement Learning

Published on the Zindi website, this challenge was open only to participants of the 2018 Deep Learning Indaba. Malaria is believed to have been the heaviest disease burden of all time, and it continues to represent a significant and disproportionate global health burden. Participants used reinforcement learning to identify new solutions that could inform malaria policy in sub-Saharan Africa. Specifically, challenge participants presented solutions to determine how combinations of interventions controlling transmission, prevalence and health outcomes of malaria infection should be distributed in a simulated human population. Eradicating this disease, which continues to wreak havoc, is a challenge for Africa, and we believe that with artificial intelligence a solution is possible.
I conclude this article by thanking the entire Deep Learning Indaba team, this year’s coordinator Katleen Simiyu, Kenyatta University, participants from all over the world, and finally the Google DeepMind team who sponsored my trip, especially Avishkar Bhoopchand and Ulrich Paquet, with whom I share this beautiful photo.
#SautiYetu, #DLIndaba2019, #GalsenAI, #IndabaXsn
#Datascientistenthusiast