Artificial Intelligence
Our vision of Artificial Intelligence was fairly simple: AI systems must be able to perform efficient inference and tractable learning. We therefore defined two thrust areas: inference and learning.
Parallel Inference Research
- Parallel or Resource-Limited Inference with Logical or Bayesian Network Representations
- Parallel Matching or Memory-Based Inference
- Parallel Inference in Logic Programs and Constraint Networks
We co-developed one of the earliest parallel deductive database systems (with Jack Minker, Madhur Kohli and others).
We derived efficient (sometimes theoretically optimal) algorithms for both matching and inference, and in some cases proved lower bounds.
We also made an early proposal for Probabilistic Databases in which the database can perform inference in time logarithmic in the size of the stored representation (with Judea Pearl, Adam Grove, and Arthur Delcher).
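The construction in that work is its own; purely to illustrate what query answering in time logarithmic in the stored representation can look like, the sketch below indexes independent fact probabilities in a segment tree, so a conjunctive query over any contiguous block of facts costs O(log n). The independence assumption and the range-query interface are illustrative simplifications, not the model from the paper.

```python
class ProbabilityIndex:
    """Segment tree over per-fact probabilities, assuming independence.

    query(lo, hi): P(facts lo..hi-1 all hold), in O(log n).
    update(i, p):  revise one fact's probability, in O(log n).
    """

    def __init__(self, probs):
        self.n = len(probs)
        self.tree = [1.0] * (2 * self.n)
        self.tree[self.n:] = list(probs)          # leaves hold fact probabilities
        for i in range(self.n - 1, 0, -1):        # internal nodes hold products
            self.tree[i] = self.tree[2 * i] * self.tree[2 * i + 1]

    def update(self, i, p):
        i += self.n
        self.tree[i] = p
        while i > 1:                               # recompute ancestors only
            i //= 2
            self.tree[i] = self.tree[2 * i] * self.tree[2 * i + 1]

    def query(self, lo, hi):
        """Probability that facts lo..hi-1 all hold (independence assumed)."""
        result = 1.0
        lo += self.n
        hi += self.n
        while lo < hi:                             # touch O(log n) nodes
            if lo & 1:
                result *= self.tree[lo]
                lo += 1
            if hi & 1:
                hi -= 1
                result *= self.tree[hi]
            lo //= 2
            hi //= 2
        return result

idx = ProbabilityIndex([0.9, 0.8, 0.5, 0.99])
print(idx.query(0, 4))   # 0.9 * 0.8 * 0.5 * 0.99 = 0.3564
idx.update(2, 1.0)       # revise one stored fact in O(log n)
print(idx.query(0, 4))   # 0.7128
```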
Learning Research
We worked on a broad range of problems in learning, focusing either on delivering widely used systems or on non-standard learning formalisms.
Some examples of this work include:
- One of the earliest (1990) theoretical frameworks for learning with limited memory, in which efficiency is measured by the number of passes the algorithm must make through the data to learn a simple concept or discover patterns in a data stream. This area is now known as Data Streaming and has produced hundreds of papers; a minimal sketch of the one-pass model appears after this list.
- An early attempt to formalize Probabilistic Databases based on Bayes networks.
- An early version of Learning with a Helpful Teacher, in which the teacher tries to convey a concept to the learner by carefully choosing the examples it presents.
- OC1, a very popular decision tree system that emphasized implementation efficiency and made a very early use of randomization in decision tree induction (predating Random Forests); a sketch of the randomized split search appears after this list.
- A new theoretical framework of Learning Subgraphs with Queries.
- Applications to Computational Biology, Systems Biology, and Bioinformatics.
- Early on (1996) we tried to shift the focus of Machine Learning to Learning Complex Behaviors, and organized an AAAI symposium on this topic (with Stuart Russell at Berkeley). Today this area has finally become central to Machine Learning under names including Deep Learning and Learning Representations.
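As an illustration of the bounded-memory, pass-based model mentioned above (not our original algorithms), the sketch below runs the classic Misra-Gries heavy-hitters algorithm: a single pass over a stream using memory that is independent of the stream's length.

```python
def misra_gries(stream, k):
    """One pass over `stream` with at most k - 1 counters (O(k) memory).

    Any item occurring more than len(stream) / k times is guaranteed
    to survive in the returned dict; the counts are lower bounds, and
    a second pass can verify them exactly.
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter and drop those that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# One pass suffices: "a" (5 of 9 occurrences > 9/3) must survive.
print(misra_gries(["a", "b", "a", "c", "a", "b", "a", "d", "a"], k=3))
```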
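OC1 itself couples deterministic coefficient perturbation with random jumps and restarts to search for oblique (hyperplane) splits; the sketch below keeps only the randomized ingredient to show the idea. The function names and parameters (n_restarts, n_perturb, the 0.1 perturbation scale) are illustrative choices, not OC1's.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array."""
    if labels.size == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / labels.size
    return 1.0 - np.sum(p ** 2)

def split_impurity(X, y, w, b):
    """Weighted Gini impurity of the oblique split X @ w + b <= 0."""
    left = (X @ w + b) <= 0
    n = y.size
    return (left.sum() / n) * gini(y[left]) + ((~left).sum() / n) * gini(y[~left])

def random_oblique_split(X, y, n_restarts=20, n_perturb=50, seed=None):
    """Search for a hyperplane split by random restarts plus random
    perturbations of the coefficients, keeping any change that lowers
    impurity (a simplified, purely randomized variant of OC1's search)."""
    rng = np.random.default_rng(seed)
    best_w, best_b, best_score = None, None, np.inf
    for _ in range(n_restarts):
        w = rng.normal(size=X.shape[1])
        b = rng.normal()
        score = split_impurity(X, y, w, b)
        for _ in range(n_perturb):
            dw = rng.normal(size=X.shape[1]) * 0.1
            db = rng.normal() * 0.1
            new_score = split_impurity(X, y, w + dw, b + db)
            if new_score < score:
                w, b, score = w + dw, b + db, new_score
        if score < best_score:
            best_w, best_b, best_score = w, b, score
    return best_w, best_b, best_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # separable by an oblique line
w, b, score = random_oblique_split(X, y, seed=1)
print(score)  # near 0: a single hyperplane nearly separates the classes
```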