Monday, December 12, 2016

Thoughts on that "Deep Learning" textbook


"Machine learning is essentially a form of applied statistics with increased emphasis on the use of computers to statistically estimate complicated functions and a decreased emphasis on proving confidence intervals around these functions; we therefore present the two central approaches to statistics: frequentist estimators and Bayesian inference.

"Most machine learning algorithms can be divided into the categories of supervised learning and unsupervised learning; we describe these categories and give some examples of simple learning algorithms from each category.

"Most deep learning algorithms are based on an optimization algorithm called stochastic gradient descent."
From "Deep Learning", Chapter 4, page 98.

I've now skim-read the PDF version. It's plainly an engineering text - plenty of detail for practitioners. As a consequence, it's hard to see the wood for the trees. Machine Learning systems are currently super-sophisticated classifiers, trained under some carefully chosen optimisation criterion (least-squares estimation is one of the simplest).

However, many problems can be put into this form.
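
To make that concrete, here is a minimal sketch - mine, not the book's - of the stochastic gradient descent quoted above, fitting a linear model under a least-squares criterion. The data and every parameter value below are invented purely for illustration:

    import numpy as np

    # Synthetic data from an assumed 'true' model y = 2x + 1, plus noise.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=200)
    y = 2.0 * X + 1.0 + 0.1 * rng.standard_normal(200)

    w, b, lr = 0.0, 0.0, 0.1             # parameters and learning rate
    for epoch in range(50):
        for i in rng.permutation(200):   # visit examples in random order
            err = (w * X[i] + b) - y[i]  # residual; d(0.5*err**2)/d(pred) = err
            w -= lr * err * X[i]         # stochastic gradient step for the weight
            b -= lr * err                # stochastic gradient step for the bias

    print(f"learned w={w:.2f}, b={b:.2f} (target: w=2, b=1)")

The same loop - a parameterised function, a loss, repeated gradient steps on examples - is what the deep architectures elaborate on at scale.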

I'm still thinking that the only way we're going to understand higher brain functions is by building models in an experimental way. The philosophers in their armchairs have conspicuously failed to deliver on 'the hard problem'.

The key to this is the ability to build and connect enough artificial neuron-type components (Intel has a new chip). A new slogan is called for: 'In the neural capacity lies the consciousness'.

That, and some clever architecture and design ideas - we seem to have no shortage of smart people flocking into this new discipline.

Schrödinger's advice to his physics doctoral candidates: 'Learn more maths!'.

---

More: "The major advancements in Deep Learning in 2016" via here.

3 comments:

  1. 'In the neural capacity lies the consciousness'

    The question still stands as to whether improved AI needs a "hardware" component rather than just software-based algorithms. Maybe we should view some of the Neural Net theory from a broader perspective.
    Don't forget Quantum Computation...

    Replies
    1. I don't doubt that an arbitrary biological neuron can be simulated in principle by a digital system (sampling theory), but even if not, we could build electronic *analogue* devices.
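
      A toy sketch of what I mean - illustrative numbers and a deliberately crude neuron model, nothing more: a leaky integrate-and-fire unit whose continuous dynamics are stepped forward digitally at a fixed sampling interval.

        # Crude leaky integrate-and-fire neuron, Euler-discretised at step dt.
        # All constants are made up for illustration.
        dt, tau, v_thresh, v_reset = 0.001, 0.02, 1.0, 0.0
        v, spikes, current = 0.0, [], 60.0          # state, spike times, input drive
        for step in range(1000):                    # 1 second of simulated time
            v += dt * (-v / tau + current)          # Euler step of dv/dt = -v/tau + I
            if v >= v_thresh:                       # threshold crossing -> spike
                spikes.append(step * dt)
                v = v_reset
        print(f"{len(spikes)} spikes in 1s of simulated time")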

      Don't get me started on AI based on cognitive DNA hacking!

      Don't get the reference to QC - no microtubules here!

    2. No microtubules, but a generalised QC model of that kind might help. I am investigating a new development in Logic since the GOFAI days (see a later Note) to see whether it has any relevance - hence some slightly cryptic comments.

