Phil Weber completed his Ph.D. at the School of Computer Science at The University of Birmingham, and as of November 2013 is awaiting his viva. From 2008 to 2009 he took the MSc in Advanced Computer Science, graduating with distinction in December 2009. Before that he worked in industry for a number of years, following his B.Sc. in Computer Science at Loughborough University of Technology.
During those years in industry Phil worked for ICL (now Fujitsu), Dixons Stores Group and the Egg bank on systems integration, consultancy, design, development and implementation, data migration, and latterly UNIX and storage systems administration (specialising in backups and data transfers). He is a Member of BCS, The Chartered Institute for IT (MBCS), and was awarded Chartered IT Professional (CITP) status based on experience and peer recommendation.
Phil is married with three children, and enjoys running (cross-country and fell-running when time permits), orienteering (lapsed KIMM-er and OMM-er) and mountain biking (one Polaris Challenge; he would like to do more). He also enjoys reading, drawing and painting, and is involved in his local church.
Phil is broadly interested in all aspects of the "data -> information -> knowledge -> 'wisdom' lifecycle": information extraction, storage and retrieval.
In practice this means machine learning and data mining. In particular, his interests now lie less in answering specific questions from data, and more in building models of the underlying phenomena that gave rise to the data. Machine learning is partly about this: using the data as evidence to draw conclusions about the "real world" and build useful models of it. With respect to speech, for example, arguably a deeper understanding of the physical processes that generate speech is vital for designing better algorithms for automatic speech recognition from the available evidence, such as audio signals.
Phil's Ph.D. focussed on Process Mining: specifically, developing a probabilistic framework for the analysis and comparison of process mining algorithms. How do different algorithms learn? How much data should be used? What does "noise" mean in this context, and how should algorithms handle it? What happens when the process evolves over time? How can the process model be made more general or easier to understand, in a principled manner? The framework developed by Phil and his supervisors Dr. Behzad Bordbar and Dr. Peter Tiňo could provide the basis for objectively answering some of these questions.
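To give a flavour of what process mining involves (this is a generic illustration, not Phil's probabilistic framework): a miner starts from an event log of traces and learns behavioural relations between activities. A minimal sketch, using a hypothetical event log and the simple "directly-follows" relation that many mining algorithms build on:

```python
from collections import defaultdict

# Hypothetical event log: each trace is the ordered sequence of
# activities recorded for one case (e.g. one order or one claim).
event_log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject", "archive"],
    ["register", "check", "approve", "archive"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by b
    across all traces in the log."""
    counts = defaultdict(int)
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

relations = directly_follows(event_log)
print(relations[("register", "check")])  # 3: seen in every trace
print(relations[("check", "approve")])   # 2: seen in two traces
```

Questions like "how much data is enough?" and "what counts as noise?" arise naturally here: a pair seen only once may be genuine rare behaviour or a logging error, and more traces shift the evidence one way or the other.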