Deep Learning Pioneers Yann LeCun and Yoshua Bengio Elected as AAAI-20 Fellows
By Avraham David Sherwood
The Editor in Chief
The Association for the Advancement of Artificial Intelligence (AAAI), one of the world’s most renowned AI societies, has announced the election of ten AAAI 2020 Fellows, among them deep learning pioneers and 2018 Turing Award winners Yann LeCun and Yoshua Bengio.
The Association for the Advancement of Artificial Intelligence launched this Fellows Program in 1990 to annually honor individuals who have made significant, sustained contributions to the field of AI.
Generally, members who have been active in the field for a decade or more are eligible for selection. The AAAI is an international scientific society devoted to promoting research in, and responsible use of, AI.
The announcements came in advance of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), a top-tier AI gathering that will be held February 7–12, 2020 at the Hilton New York Midtown, New York, USA.
The association aims to increase public understanding of artificial intelligence (AI), improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.

Yoshua Bengio is Professor in the Department of Computer Science and Operations Research at Université de Montréal and Canada Research Chair in Statistical Learning Algorithms. He was one of the first to combine neural networks with probabilistic models of sequences, an approach that has since been extended to speech recognition tasks. He also introduced the well-known concept of word embedding – a language-modeling and feature-learning paradigm that maps words in a vocabulary to vectors. Furthermore, he is one of the contributors to generative adversarial networks (GANs), a revolutionary method with applications in machine translation, image generation, audio synthesis, and more.
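The idea behind word embeddings can be illustrated with a minimal sketch: each word is assigned a dense vector, and related words end up pointing in similar directions. The vectors below are hand-picked toy values purely for illustration; real embeddings are learned from large text corpora.

```python
import math

# Toy, hand-picked embeddings (real ones are learned, not chosen by hand).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; close to 1.0 means similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Semantically related words score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]) >
      cosine_similarity(embeddings["king"], embeddings["apple"]))  # True
```

With learned embeddings, the same similarity measure powers tasks such as language modeling and machine translation.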

His research in Montreal helped to drive the progress of systems that aim to understand natural language and technology that can generate fake photos that are indistinguishable from the real thing.
Bengio also worked with Yann LeCun on computer vision breakthroughs when they were at Bell Labs, and went on to apply neural networks to natural language processing, leading to major advances in machine translation.
Currently, Bengio is a professor at the University of Montreal and the science director of both Mila (Quebec’s AI Institute) and the Institute for Data Valorization. More recently, he has worked on a method best known for enabling neural networks to create completely novel, but highly realistic, images.

Yann LeCun is well known for his work on optical character recognition and computer vision using convolutional neural networks (CNNs), of which he is a founding father.
He developed convolutional neural networks, was among the first to train neural networks for handwritten digit recognition in the 1980s, and contributed to an early version of the back-propagation algorithm.
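The core operation of a CNN can be sketched in a few lines: a small filter slides over the image and computes a sum of elementwise products at each position. This toy example uses a fixed, hand-written edge-detecting filter with stride 1 and no padding; in a real CNN the filter weights are learned from data.

```python
def convolve2d(image, kernel):
    """Slide `kernel` over `image` (stride 1, no padding) and sum products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector applied to a 4x4 image whose right half is bright.
image = [[0, 0, 1, 1] for _ in range(4)]
kernel = [[-1, 1],
          [-1, 1]]
print(convolve2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The strong response in the middle column marks the vertical edge; stacking many learned filters of this kind is what lets CNNs recognize handwritten digits.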
He is also one of the main creators of the DjVu image compression technology (together with Léon Bottou and Patrick Haffner). He co-developed the Lush programming language with Léon Bottou.
Yann LeCun is Scientific Director of artificial intelligence for Facebook and professor at New York University.
LeCun developed and extended the remarkable capabilities of neural networks, making them more powerful and useful. The techniques LeCun pioneered, including backpropagation and convolutional neural networks, have become ubiquitous in AI and, by extension, in technology as a whole.
LeCun – together with Geoffrey Hinton and Yoshua Bengio – is referred to by some as one of the “Godfathers of AI”.
The other new AAAI Fellows are:
- Cynthia Breazeal, a pioneer of social robotics and human–robot interaction.
- William T. Freeman, a computer vision pioneer.
- Radhika Nagpal, co-founder of Root Robotics Company.
- Natasha Noy, contributor to the Protégé ontology editor, the Prompt alignment tool, and Google Dataset Search.
- Martha Palmer, contributor to verb semantics and creator of ontological resources such as PropBank and VerbNet.
- Dragomir R. Radev, secretary of the ACL and associate editor of JAIR.
- Thomas Schiex, pioneer of constraint satisfaction problems in computational biology.
- Sylvie Thiebaux, co-editor-in-chief of the Artificial Intelligence Journal.
