Monday, May 31, 2010
In my own version of the 'Pygmalion Project' I bought Adrian a copy of Milan Kundera's "The Unbearable Lightness of Being", which I first read maybe 20 years ago. The novel is suffused with Nietzsche's concept of "eternal recurrence", which seems to have entered late nineteenth-century cultural thought from then-contemporary physics: Ludwig Boltzmann was developing statistical mechanics, which had the notion of infinite recurrence (in a finite universe) built in.
So what is "eternal recurrence"? All quotes from the Wikipedia article.
"Eternal return (also known as "eternal recurrence") is a concept which posits that the universe has been recurring, and will continue to recur in a self-similar form an infinite number of times. The concept initially inherent in Indian philosophy was later found in ancient Egypt, and was subsequently taken up by the Pythagoreans and Stoics. With the decline of antiquity and the spread of Christianity, the concept fell into disuse, though Friedrich Nietzsche resurrected it on the grounds that it provides a reason for affirming life after the decline of theism."
The physics is captured in this quote.
"Related to the concept of eternal return is the Poincaré recurrence theorem in mathematics. It states that a system whose dynamics are volume-preserving and which is confined to a finite spatial volume will, after a sufficiently long time, return to an arbitrarily small neighborhood of its initial state. It should be noted that "a sufficiently long time" could be much longer than the predicted lifetime of the universe."
While Adrian is hopefully getting down with Milan, I'm reading Sean Carroll's book about time, "From Eternity to Here", which is preoccupied with similar matters: entropy and the reason for the 'arrow of time'.
Alex is here too this bank-holiday weekend and we were reminiscing about David Mace's military SF novel "Firelance", which describes a nuclear-hardened battleship in a post-nuclear war apocalypse. The dilemma of the novel is whether the warship should complete the destruction of the biosphere by unleashing its cargo of nuclear cruise missiles in a final counterforce strike against Russia. The Russians are understandably keen to destroy the ship before this mission can be accomplished. The story starts grim and relentlessly gets grimmer.
Sadly the book is out-of-print but Amazon points to resellers and a cheap, used but good quality version is winging its way to me as I write.
We're still living in a building site.
Wednesday, May 26, 2010
"This is a revised and more "user-friendly" version of [Seel, 90]* which wrote up my Ph.D. work for publication. The latter has more pictures, and its slightly greater austerity may please the formalist. Still, all the math is here; it's just more clearly explained!...
The problem we're looking at is simply stated. Take an example: we can look at intelligent robots operating in trial-and-error mode in a local environment, and accurately describe their behaviour using sentences like "look, it didn't know there was a hole there, that's why it fell over", and "now it knows where the power supply is, it will find a direct route."
In the jargon, this use of words like "knows", along with words like "believes", "wants" to describe and predict the behaviour of agents is called intentional description. It's so natural that it's not even obvious why it should be problematic. But the designers and engineers who constructed the robot have a different story as to how it behaves. They understand the causal mechanisms underpinning its perceptual, information-storage and problem-solving abilities. They can also predict (and alter and fix) the robot's behaviour.
Engineers normally have little time for intentional descriptions, considering them as so much sentimentality and anthropomorphism. Also, they don't see how intentional descriptions could work - the casual observer doesn't, after all, know the engineering. Still, intentional descriptions do work - we use them all the time and not just for robots.
To see how it all fits together, we need to use mathematical models: both for robots and their environments (for which we use automata-type formalisms) and for intentional language (for which we use logic, namely the special modal logics of knowledge, belief and goals).
It then turns out that the relationship between the two kinds of description, intentional and engineering, comes out in the maths as a semantic relationship between formulae in the logic and the appropriate automata-structured model spaces of the logic.
To properly get the details of what follows, you need to know basic propositional modal logic as taught in introductory AI or logic courses. Despite the jargon, the paper is more interested in conceptual clarity than the use of deep or intricate mathematical techniques (in fact I don't use any). If you read the words and avoid the formulae, you still get the gist of what's going on.
A shorter alternative to reading the rest of this paper is to look at (Seel, 1991) at http://interweave-consulting.blogspot.com/2010/05/logical-omniscience-of-reactive-systems.html."
* Seel, N. R. (1990). Intentional Description of Reactive Systems. In Y. Demazeau & J-P. Muller (Eds.), Proceedings of the Second European Workshop on Modelling Autonomous Agents in a Multi-Agent World. Elsevier Science Publishers B.V./North-Holland.
This is the main technical presentation of my Ph.D work in an accessible form.
In 2007 we also drove down the west coast of France, basing ourselves in a camp-site in the Dordogne. Here's the diary of what was an eventful holiday.
I used to have these PDFs on my website but since letting that go I've been at a loss as to how to post documents. Luckily with the move to Google's gmail, a document-storage facility came as well. Thanks Google.
When my sister, who's currently driving from Spain to France with her husband, read these accounts she accused me of being 'Mr Grumpy'. My only defence is that it's so much more amusing to read about the things which go wrong.
Sunday, May 23, 2010
Saturday, May 22, 2010
Our driveway resembles not so much the early universe during the period of exponential inflation as the moon during the period of intense asteroid bombardment. So recalcitrant is the concrete of the driveway that the tread came off the digger. Luckily our cat, Shadow, was on hand to lend some advice to the builders. Observe how he patiently waits for them to return with the correct tools.
Here's a bonus recently unearthed from the archive (below). Alex and Adrian in Florence in the nineteen eighties. I was working at STL on an IT project with Olivetti - some ESPRIT thing or other - and we had decided to drive down to Italy and make a holiday of it. I was working and Clare took the kids to see the culture.
Sadly in all my zooming about I have never yet got around to visiting Florence myself.
Wednesday, May 19, 2010
The problem of 'logical omniscience' continues to engage attention in AI. In this paper it is argued that the problem arises from interpreting formal properties of epistemic logics into agent performance domains, where agent cognition is taken to incorporate formula-manipulation, which then mirrors inference in the logic. The case of Reactive Systems is examined, where some such systems eschew such formula-manipulation in favour of fast-acting mechanisms. A formal account of such agents' knowledge states can still be given, but the 'problem' of logical omniscience re-emerges in a new light.
This paper is published as: N. R. Seel (1991). The 'Logical Omniscience' of Reactive Systems. In Proceedings of the Eighth Conference of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, Leeds, U.K.
The phenomenon of 'logical omniscience' is a technical one, which occurs within certain formal systems which attempt to capture notions of 'knowledge'. The notion of logical omniscience is orthogonal to distinctions between knowledge and belief: the concepts of 'knowledge' and 'belief' intermix freely in this paper.
It is usual to formalise the notion that an agent 'knows p', where p is some proposition, by defining an operator such as 'K' and writing syntactically: 'Kp'. K cannot be an ordinary propositional operator such as 'not', because it is not truth functional (e.g. if p is contingently true, then nothing special can be said of the truth value of Kp).
The standard approach is to treat K as a modal operator, and to treat the analysis of epistemics as a particular interpretation of standard modal logic with possible worlds semantics. A story is then told as follows. Agents are not omniscient about the contingent facts of the world they inhabit: there are many possible ways the world could be, that an agent cannot distinguish between.
Thus in each situation (world) we associate a number of 'accessible worlds' with the agent, capturing the space of possibilities within which the agent considers its actual situation to be. The things an agent is sure about reflect commonalities between these worlds; the things an agent is agnostic about take distinct forms in different worlds. Note that the agent's lack of knowledge prevents it from identifying which of the possible worlds is to be considered actual, while if it has incorrect beliefs, then none of the accessible worlds will be the real one.
In such a semantics, the following axiom, called 'K', is valid:
K: Kp ^ K(p -> q) -> Kq
Axiom 'K' is valid because in every world in which p and (p -> q) hold, q has to hold too, by the meaning of the implies operator (Chellas, 1980, p. 7), and the notion of satisfaction relation used.
In a similar fashion, logically necessary formulae, which could not be false, must be true in every possible world, and therefore must be 'known'. So we have the inference rule called the Rule of Necessitation (RN), (Chellas, ibid, p. 14): from |- p, infer |- Kp.
To sum up, on the standard modal logic account axiom K is valid and inference rule RN is sound.
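The validity of axiom K can be checked mechanically. The sketch below (mine, not from the paper) enumerates every possible accessibility relation over the four valuations of {p, q} and confirms that Kp ^ K(p -> q) -> Kq holds at every world of every model, with no conditions on the relation at all.

```python
from itertools import product

# Worlds are valuations (p, q); a model maps each world to its list of
# accessible worlds. 'Kf' holds at w iff f holds at every accessible world.
WORLDS = list(product([False, True], repeat=2))

def K(model, w, f):
    return all(f(v) for v in model[w])

def axiom_K_valid(model):
    """Kp ^ K(p -> q) -> Kq at every world of the model."""
    p = lambda w: w[0]
    q = lambda w: w[1]
    imp = lambda w: (not w[0]) or w[1]          # p -> q
    return all(not (K(model, w, p) and K(model, w, imp)) or K(model, w, q)
               for w in WORLDS)

# Enumerate all 2^16 accessibility relations over the four worlds.
ok = all(
    axiom_K_valid({w: [v for v in WORLDS if edges[(w, v)]] for w in WORLDS})
    for edges in ({pair: bool(b)
                   for pair, b in zip(product(WORLDS, WORLDS), bits)}
                  for bits in product([0, 1], repeat=len(WORLDS) ** 2))
)
print(ok)  # True: the axiom needs no conditions on the accessibility relation
```

This is exactly why the axiom is inescapable on the standard account: it falls out of the satisfaction relation itself, not out of any special property of the agent.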
These facts, not necessarily controversial in the context of modal logic, become problematic when epistemic logic is interpreted into the real-world domain of agents which are said to know things. It seems to follow that such an agent knows all the consequences of the things it knows (by axiom 'K' and modus ponens), and knows all necessary truths (presumably including all of mathematics) by inference rule RN.
The trouble is, these conclusions are felt to violate our intuitions about how knowledge and belief actually work (in people). More precisely, our intuitions encompass a performance model of knowing and believing, in which people, when asked, fail to give satisfactory answers where the theory would suggest they ought: (e.g. "Is 'Fermat's Last Theorem' actually a theorem of Peano Arithmetic?").
In this paper I first survey recent attempts to address the 'problem' of logical omniscience. This is followed by a discussion which focuses more on agents 'in the world' - reactive systems, and in particular what it means for a reactive system to know something. Finally, I discuss whether considering situated agent-environment interaction as the primary phenomenon might shed new light on the 'problem' of logical omniscience: for example, by establishing it as an artifactual problem.
2. THE PROBLEM OF LOGICAL OMNISCIENCE
As Hintikka (1962) observed, it is possible to 'get round' the problem of logical omniscience by assuming that the K operator does not refer to what is explicitly believed by a person, but instead refers to what is implicit in those beliefs - what would have to be the case if those beliefs were true. Another proposed interpretation is that we are dealing with 'idealised reasoners', which in some sense do not suffer from ordinary limitations.
Even if these suggestions were acceptable, there still remains the problem of dealing with agents which are presumed not to believe all the consequences of their beliefs, and not all necessary truths. What is the nature of the assumed fallibility of such agents? Four limitations have been discussed in the literature, (Levesque, 1984; Fagin & Halpern 1985, 1988; Konolige, 1986).
1. The agent lacks awareness of a concept (syntactically, the predicate naming the concept is 'missing' in some sense from the agent's lexicon, or semantically, it lacks a valuation).
2. The agent undertakes some actions equivalent to theorem-proving - by applying transformation rules to tokens, but is resource-bounded, so that tokens requiring more than some number of applications of the rules are not derived.
3. The agent is as (2), but some transformation rules which are needed for completeness are contingently omitted.
4. The agent maintains a number of contexts for reasoning, which are applied in different situations. These contexts need not be the same, or even consistent with each other.
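Limitation (2) above is easy to make concrete. The toy below (my illustration, not from the cited papers) closes a belief set under modus ponens but only for a fixed number of rounds, so consequences lying beyond the resource bound are never derived.

```python
def bounded_closure(beliefs, depth):
    """Close `beliefs` under modus ponens, for at most `depth` rounds.
    Atoms are strings; an implication a -> b is the tuple ('->', a, b)."""
    known = set(beliefs)
    for _ in range(depth):
        derived = {f[2] for f in known
                   if isinstance(f, tuple) and f[0] == '->' and f[1] in known}
        if derived <= known:   # nothing new: closure reached early
            break
        known |= derived
    return known

# A chain p0 -> p1 -> ... -> p4, starting from belief in p0 alone.
beliefs = {'p0'} | {('->', f'p{i}', f'p{i+1}') for i in range(4)}
print('p2' in bounded_closure(beliefs, depth=2))  # True: within the bound
print('p4' in bounded_closure(beliefs, depth=2))  # False: beyond the bound
```

Such an agent believes some, but not all, consequences of its beliefs, which is precisely the behaviour the standard semantics cannot express.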
Assuming we wish to model some or all of these epistemic fallibilities, the strategy runs as follows. First we introduce a new operator, B say, which aims to capture explicit belief - the beliefs actually attributable to the epistemically fallible agent. The beliefs which are closed under logical consequence, include all valid formulae, and so on, are then called implicit beliefs, and are given a different operator.
Secondly, some axioms are given which characterise the nature of the agent's epistemic limitations. For example, we might want
(1) -B(p v -p)
to be satisfiable, where a necessary truth is not believed because the agent has never heard of p. Similarly
(2) Bp ^ B(p -> q) ^ -Bq
may be satisfiable, showing a failure to draw logical consequences.
Finally, some mathematical structures are derived which provide a suitable class of models for the proposed logic, and which make the axioms come out valid. This last step may be more or less difficult.
An early example of this procedure was Levesque's use of partial and incoherent worlds, called situations after the similar constructions in Situation Theory, (Barwise & Perry, 1983). Partial situations do not provide valuations for all the sentence letters of the language of the logic, which permits lack of awareness to be modelled as in (1) above; incoherent situations permit simultaneously supported contradictory valuations of sentence letters, resulting in the failure of closure of explicit belief under implication as in (2) above.
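Levesque's construction can be sketched in a few lines. The version below is my own single-situation simplification (his B operator quantifies over a set of situations): a situation maps each sentence letter to a subset of {'T', 'F'}, so an empty set models partiality and both values model incoherence.

```python
def supports(s, f, sign):
    """Does situation s support the truth ('T') or falsity ('F') of f?
    Formulas: sentence letters, ('not', f), ('or', f, g)."""
    if isinstance(f, str):
        return sign in s.get(f, set())
    if f[0] == 'not':
        return supports(s, f[1], 'F' if sign == 'T' else 'T')
    if f[0] == 'or':
        parts = (supports(s, f[1], sign), supports(s, f[2], sign))
        return any(parts) if sign == 'T' else all(parts)

def B(s, f):      # explicit belief: the situation supports f's truth
    return supports(s, f, 'T')

excluded_middle = ('or', 'p', ('not', 'p'))
p_implies_q = ('or', ('not', 'p'), 'q')       # p -> q rewritten as -p v q

partial = {'p': set()}                        # p gets no valuation at all
print(B(partial, excluded_middle))            # False: tautology unbelieved, as in (1)

incoherent = {'p': {'T', 'F'}, 'q': set()}    # p both supported and denied
print(B(incoherent, 'p'), B(incoherent, p_implies_q), B(incoherent, 'q'))
# True True False: belief fails to close under modus ponens, as in (2)
```

The incoherent situation supports the falsity of p, hence the truth of -p v q, without ever touching q, which is how closure under implication is broken.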
Although it appears that there is nothing wrong with Levesque's treatment from a formal point of view, there are some questions as to whether it adequately addresses the problems it sets itself. Thus as Vardi (1986) points out, Levesque's agents are perfect reasoners in the constrained framework of relevance logic: "Unfortunately, it does not seem that agents can reason perfectly in relevance logic any more than in classical logic" (p. 294). The esoteric nature of incoherent worlds also raises some problems. In interpreting the semantics into the actual world, what could they be?
In (Fagin & Halpern, 1985, 1988) a number of logics are presented which attempt to model one or other of the epistemic fallibilities mentioned above, while avoiding Levesque's difficulties. Levesque's logic is re-engineered into a 'Logic of Awareness', which restores the general setting of total possible worlds semantics (abandoning partial and incoherent situations), but associates with each world an (agent-indexed) 'awareness set' of propositional tokens which the agent is aware of at that world. Under the semantics given, all of Levesque's axioms are valid.
By extending the 'awareness set' to formulae in general (not just propositional tokens), properties of Levesque's operators such as failure of consequential closure can be reproduced, and by choosing the contents of the awareness sets with care (by computability properties), resource-bounded reasoning can be modelled.
As Konolige (1986) observes, however, the superposition of a basic possible worlds model together with arbitrary syntactic restrictions at worlds leads to a rather clumsy hybrid system. The system is neither elegant, nor does it provide good support for intuitions about resource-bounded reasoning. In particular, the technical apparatus can be re-expressed more coherently in purely sentential terms.
Fagin and Halpern's final logic supports a notion of 'local reasoning'. Here, each agent in a given state is associated with a collection of sets of possible worlds. Each of these sets constitutes a 'context of belief'. Hence the satisfiability of (2) above can follow from the agent believing p in one context, p -> q in another, but never putting these two contexts together to believe q.
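The 'local reasoning' idea admits a very small sketch (my assumed shape of the semantics, after Fagin & Halpern): the agent holds a collection of sets of possible worlds, one per context, and believes a formula iff it holds throughout some context.

```python
# Worlds are (p, q) valuations; each context is a set of worlds.
def believes(contexts, f):
    """Belief in some context: f holds at every world of at least one context."""
    return any(all(f(w) for w in c) for c in contexts)

p = lambda w: w[0]
q = lambda w: w[1]
p_implies_q = lambda w: (not w[0]) or w[1]

contexts = [
    {(True, True), (True, False)},                  # every world here satisfies p
    {(False, True), (False, False), (True, True)},  # every world here satisfies p -> q
]
print(believes(contexts, p))            # True, via the first context
print(believes(contexts, p_implies_q))  # True, via the second context
print(believes(contexts, q))            # False: the contexts are never combined
```

The agent believes p and believes p -> q, yet fails to believe q, exactly the satisfiability of (2) described above.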
To sum up, these approaches to handling the 'logical omniscience' problem all exhibit a common theme: a movement from (1) to (3), thus:
(1) The identification of a concept of 'knowledge' or 'belief', abstracted from human affairs.
(2) A formalisation of the concept by operators, (K's or B's), which take the concept, as a conceptual category, to be unproblematic. This is then followed by the exploration of its properties through various candidate axiomatisations as we have seen above.
(3) The search for a simple, elegant mathematical structure capable of providing a semantics for the axiomatisation. In other words:
Figure 1:
Philosophical analysis identifies a concept of 'knowledge' =>
Concept then formalised as a logical operator; properties axiomatised =>
Esoteric structures then developed to make the axioms valid.
Looking at reactive systems forces us to break with this reified view of 'knowledge' as a thing in itself, and restores the primacy of the situated epistemic subject engaged in practical relationships with its environment.
3. REACTIVE SYSTEMS
In most areas of AI, it is impossible to define the key concepts (e.g. knowledge-based system, blackboard architecture) with any precision. Things are no different with reactive systems, as can be seen from this preliminary attempt at a definition, adapted from (Pnueli, 1986).
Definition: Reactive System
A reactive system is an entity which interacts in a systematic way with its environment.
This however fails to capture the main intuitions of workers in this area: (e.g. Brooks, 1986; Kaelbling, 1986; Agre & Chapman, 1987) that:
- the environment is taken to be complex and hard to perceive,
- the environment is unpredictable in the large, and continually sets novel contexts for the system to which it has to respond,
and perhaps the most important intuition:
- the environment requires (often, always?) a very fast response from the system.
Designers of reactive systems have therefore emphasised:
- rapid response procedures, which take input (in some more or less 'raw' form) and optionally, state-information maintained by the system; and then return output (in some more or less 'raw' form) and optionally updated-state;
- a taxonomisation of required agent behaviours: most important => least important, which can be factored into concurrent task-achieving modules in the system architecture, plus some scheduling mechanism (cf Brooks, Kaelbling ibid).
Designers have de-emphasised/rejected suggestions that reactive systems should support explicit world-models + declarative reasoning about world and agent behaviour, in favour of any mechanisms which will permit the agent to respond appropriately (including fast enough) in a situated fashion. So although in principle a reactive system (according to the definition) could maintain a 'formulae database' world-model as a causal factor in its behaviour, it need not, and in what follows I assume it does not.
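A reactive agent of the kind just described can be sketched in a few lines (my toy, not a system from the cited papers): behaviour is a direct mapping from (stimulus, internal state) to (action, new state). There is no formulae database and no declarative reasoning, just a fast lookup, here given a Skinner-box flavour.

```python
# Stimulus-response policy: (stimulus, state) -> (action, new state).
POLICY = {
    ('light_off', 'untrained'): ('wait',        'untrained'),
    ('light_on',  'untrained'): ('explore',     'untrained'),
    ('food',      'untrained'): ('eat',         'trained'),
    ('light_off', 'trained'):   ('wait',        'trained'),
    ('light_on',  'trained'):   ('press_lever', 'trained'),
    ('food',      'trained'):   ('eat',         'trained'),
}

def step(stimulus, state):
    return POLICY[(stimulus, state)]

state = 'untrained'
trace = []
for stimulus in ['light_off', 'light_on', 'food', 'light_on']:
    action, state = step(stimulus, state)
    trace.append(action)
print(trace)  # ['wait', 'explore', 'eat', 'press_lever']
```

After the food arrives, the agent's visible behaviour changes: an observer might naturally say it "now knows" the light signals the lever, although nothing formula-like exists inside the mechanism.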
Now, if the notion of an agent's 'having knowledge' crucially depends upon the existence of such a formulae database, then one is in some epistemic trouble with reactive systems. But perhaps epistemic notions are not, after all, architectural, but are instead a fine-grained way of describing agent behaviour?
In (Seel, 1989) I described in detail a reactive agent in a Skinner box environment, and proved some properties of it using a temporal logic. In (Seel, 1990a, b) I extended the logic with an epistemic operator, and showed how it captured fine-grained agent behaviour in the environment, such as making mistakes, and learning. I will briefly restate the approach to the epistemic modelling of reactive systems outlined there, and then consider the implications for the 'logical omniscience' of reactive systems.
4. HOW CAN A REACTIVE SYSTEM KNOW SOMETHING?
I will summarise the epistemic treatment of reactive systems in nine theses.
1. Consider the agent as an automaton-object (not necessarily finite-state). Consider the other entities constituting the agent's environment as also automata-objects (not necessarily finite-state). Let a collection comprising an agent plus the other objects constituting its environment, each in a definite state, be called a scene. Agent and environment objects are deterministic. Other objects could also be considered as agents.
2. Scenes can be executed, so that as each object's next state function is applied, each object in the scene updates itself to its next state: collectively the next scene is constructed. Call the infinite sequence: [initial-scene; next scene; one after that; ... ] a history. Note that the behaviour of objects defined in an initial scene is entirely determinate - the notion of history is well-defined.
3. Each object has both private and public state. An object cannot access another object's private state (it can access its own private state + other objects' external states as part of its own next-state computation). Assume an observer who looks at scenes from 'outside'. The observer also can only 'see' the objects' external states. This means that on the information available to the observer, the evolution of scenes as a history unfolds may not be determinate, due to the invisibility of private state in the objects.
4. The behaviour of the agent and environment is not arbitrary but is constrained by rules. These rules have a conditional and temporal character:
- if such and such has happened then subsequently the environment will behave thus;
- if such and such has happened then subsequently the agent will behave thus.
These rules state conditions on the external states of objects; they are assumed to be known both to the agent/environment designer and to the observer. Conditionality imposes on the agent designer the requirement that the agent determine at 'run time' the specifics of the rules which actually hold. The resulting learning task provides the conditions for non-trivial epistemic description of the agent.
5. The observer starts by looking at an initial scene, scene zero. There are many possible histories which would look the same to the observer at scene zero, (differing in objects' private states, and next state functions). The observer collects these together and calls them the possible histories at scene zero. As the actual history unfolds, scene by scene, perfectly definite events occur, registered in the changing external states of the objects. The observer prunes from the possible histories all the ones which differ in their early scenes from what has actually happened in the history being watched.
As the actual history unfolds, and evidence accumulates, certain of the rules have their antecedents satisfied, 'kick in' and constrain the pattern of future events. All the histories which don't fit these patterns get removed from the possible histories as well.
6. Suppose we have now reached scene n of the actual history. As the observer looks at the possible histories, it may be the case that they all agree on what the agent (or environment) will do in scene (n+1) (and maybe (n+2), (n+3), etc). Since the actual history is certainly one of the possible histories, then it is predictable what the agent (or environment) will do next.
7. If the different histories in the space of possible histories tell different stories about what the agent (or environment) will do next, then the observer cannot predict exactly what will happen next, only that some choice from amongst those in the possible histories will be found to occur in the actual history.
8. Since the agent itself may have no more information than the observer (recall the designer's ignorance mentioned above), then it may also find itself in an informational state as in (6), in which case it may be said to act appropriately, or (7) in which case it may, by "guessing wrong", be said to act in error.
9. Given a certain adequacy of the rules (thought of as specifications), there may come a point where in certain types of scene, the possible histories always agree on what the agent should do next. If that was not previously the case, the observer may say that the agent has learned to behave appropriately in these scenes.
Now, all this may seem a creative use of conditional rules to compensate for lack of information about private states on the part of the observer, and so it is. But when it is formalised properly, the changing structure of possible histories over 'time' turns out to replicate the Kripke semantics underlying a logic of knowledge and time.
So if we wish, we can stop talking about possible histories, and instead introduce talk about the agent's knowledge. Since the semantics is in place (it induces a KT45 axiomatisation for the knowledge operator) we can prove theorems about perception, knowledge and action. See (Seel, 1990a, b) for details of the formalisation, proofs and discussion.
Needless to say, the agent itself need have none of this: it only needs to change some internal structures to align its response behaviour correctly to the incoming stimuli from the environment, in the best reactive tradition.
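The observer's procedure in theses 5-9 can be illustrated concretely. In the paper histories are infinite; in this toy of mine each hypothesis about the hidden state (which lever the environment rewards) generates a stream of observable outcomes, and the observer prunes hypotheses against what actually happens, ascribing knowledge when all survivors agree.

```python
# The environment's hidden rule: only one lever yields food.
def env_output(hypothesis, action):
    return 'food' if action == f'press_{hypothesis}' else 'nothing'

def surviving(hypotheses, observed):
    """Keep the hypotheses consistent with observed (action, result) pairs."""
    return [h for h in hypotheses
            if all(env_output(h, a) == r for a, r in observed)]

hypotheses = ['left', 'right']

print(surviving(hypotheses, []))
# ['left', 'right']: the observer cannot yet predict the outcome

print(surviving(hypotheses, [('press_left', 'nothing')]))
# ['right']: the agent guessed wrong, but all surviving histories now agree
```

Once a single hypothesis survives, the observer may say the agent 'knows' which lever pays, provided the agent's visible behaviour tracks the same evidence; nothing inside the agent needs to represent this.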
5. REVISITING LOGICAL OMNISCIENCE
Notice how the above treatment dramatically modifies the way the problem of 'logical omniscience' is posed. Unlike in figure 1 above, we are no longer trying to exemplify some abstract, platonic notion of 'pure knowledge'. Such conceptual analysis is replaced by an attempt to reason correctly about a highly situated agent-environment interaction thus:
- Situated agent-environment interaction, developing in “time”. Behaviour conforms to public rules, but implemented using private mechanisms.
- Observer reasons in epistemic logic, based on public rules, to compensate for lack of (private) mechanism information
So what are the properties of the formal epistemic theory which, generated by the observer, is attributed to the agent? The most obvious feature is the restriction of the lexicon of the logic to the naming of the interaction-relevant events in the 'life' of the agent. There is no attempt to start talking about number theory or stock-exchange prices. (And why should there be? These are problems within some localised regions of human social life - another, but different concrete situation). Hence the notion of 'limited awareness' here arises as a natural consequence of the specificity of the agent-environment mathematical model.
As regards the other issues flagged as problems above, (resource-boundedness, consequential closure and the knowing of all valid formulae), it is certainly true that the observer's epistemic theory has axiom 'K' as valid, and supports the Rule of Necessitation. But this is beside the point because there are no agent-performance implications to these facts.
The agent is not concerned because it is not computing with the logic, merely being a reactive mechanism.
Resource-boundedness may be a problem for the observer reasoning in the logic, as it is for all users of formalised theories, but not for the agent, which merely acts. In some sense, the logic ‘makes contact' with the agent at perception and action: in between, the observer may undertake arbitrarily long chains of reasoning, and deduce all kinds of valid formulae; the agent simply operates as a mechanism according to its design, using its private state and various observable public states to produce its next output.
To sum up, the agent (in its environment) satisfies the logic, but doesn't use it. Hence for the agent, the proof-theoretic properties of the logic which are called 'logical omniscience' (and which get dressed up with all kinds of performance implications to worry us), simply don't apply.
The reader is likely to be left uneasy by this conclusion. "Sure, you can cut the problem down to size for the dumb beasts, which is what reactive systems really are, but it's cheating - the problem is really meant to be people!"
Well, no and yes. It is an achievement to base a concept of knowledge on a thoroughly behavioural analysis of agent behaviour in an environment - it's not cheating. And yes, the real problem is people (and not logic!).
This suggests that a more compelling resolution of the classical difficulties will emerge from formalised sociological theories, modelling the situated formation of social constructs such as mathematics by human communities. Difficult as this may seem, it holds out more hope of success than those approaches which reify the problem as one of logic design.
The problem of 'logical omniscience' in its traditional guise was outlined, and some recent attempts to address it were surveyed. Attention was then focused on the agents themselves which traditionally are meant to suffer the 'problem'.
In the case of reactive systems, it was argued that a perfectly coherent behaviour-based notion of agent-knowledge is possible, without any assumption that the agent has to conduct logical reasoning of any kind.
In this case, the problem of logical omniscience, which at root is a performance problem, simply vanishes as a real problem. It is suggested that the task of adequately modelling human 'beliefs' and cultural accomplishments, such as mathematics and the formalised sciences, belongs more properly to yet-to-be-developed formal social theories rather than logic per se.
Although disagreeing with their conclusions, I found (Reichgelt & Shadbolt, 1990) a useful source of ideas.
Agre, P. E. & Chapman, D. (1987). Pengi: an implementation of a theory of activity. In Proceedings of the Sixth National Conference on Artificial Intelligence, pp 268-272.
Barwise, J. & Perry, J. (1983). Situations and Attitudes. Bradford Books/MIT Press.
Brooks, R. A. (1986). A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation, Volume RA-2, Number 1, pp 14-23.
Chellas, B. (1980). Modal Logic: An Introduction. Cambridge University Press.
Fagin, R. & Halpern, J. Y. (1985). Belief, Awareness, and Limited Reasoning: Preliminary Report. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI-85), pp 491-501.
Fagin, R. & Halpern, J. Y. (1988). Belief, Awareness and Limited Reasoning. Artificial Intelligence, 34.
Hintikka, J. (1962). Knowledge and Belief. Cornell University Press.
Kaelbling, L. P. (1986). An Architecture for Intelligent Reactive Systems. In Georgeff M. P. & Lansky A. L. (Eds.), Reasoning about Actions and Plans. Morgan Kaufmann.
Konolige, K. (1986). What Awareness Isn't: A Sentential View of Implicit and Explicit Belief. In J. Y. Halpern (Ed.), Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference. Morgan Kaufmann.
Levesque, H. J. (1984). A Logic of Implicit and Explicit Belief. In Proceedings of the National Conference on Artificial Intelligence, (AAAI-84), pp 198-202.
Mendelson, E. (1987). Introduction to Mathematical Logic. Wadsworth & Brooks.
Pnueli, A. (1986). Specification and Development of Reactive Systems. In Information Processing 86 (IFIP). Elsevier Science.
Reichgelt, H. & Shadbolt, N. R. (1990). Logical Omniscience as a Control Problem. Paper, Department of Psychology, Nottingham University.
Seel, N. R. (1989). A Logic for Reactive System Design. Proceedings of the Seventh Conference of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, pp 201-211.
Seel, N. R. (1990a). Intentional Description of Reactive Systems. In Y. Demazeau & J-P. Muller (Eds.), Proceedings of the Second European Workshop on Modelling Autonomous Agents in a Multi-Agent World. Elsevier Science.
Seel, N. R. (1990b). Formalising First-Order Intentional Systems Theory. Ph.D. thesis available from author.
Vardi, M. (1986). On Epistemic Logic and Logical Omniscience. In J. Y. Halpern (Ed.), Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference. Morgan Kaufmann.
Tuesday, May 18, 2010
The first is a snowboarding scene, no doubt with Adrian in mind; the second a long boat inspired by her OU archaeology course.
Click on either to make larger.
Wolftrap (Arts Centre)
Nigel with "Flames" in our Virginia home
Monday, May 17, 2010
Here's a picture of me being blasted by the on-shore wind, Flat Holm behind.
And here's a picture of the famous goats (one of them anyway). We bravely walked through a herd of them on the way back: they dutifully separated for us.
We're high and there's quite a view looking back towards Wells (you can't see it). Our car is down there somewhere.
Then we drove home and bought two gigantic helpings of fish 'n' chips.
Sunday, May 16, 2010
Someone at the time compared most AI to the contribution that making paper aeroplanes makes towards a theory of aerodynamics. The idea is that just because you can write a program to replicate some micro-behaviour doesn't necessarily mean you have any handle on understanding it.
Marvin Minsky is alleged to have said recently that AI has made no real progress since the 1970s. It's scary how little we know: we don't even know what the fundamental problem is, let alone how to solve it. The fundamental problem is not likely to be "intelligence" per se. So what has worked?
Designing and building systems which can execute a perceive-process-act cycle in fulfilment of arbitrary "missions" (including homeostasis) has been a major sub-project of AI. This research "thread" has had to tour such areas of unorthodoxy as neural nets and reactive systems, but there have been major successes, often in areas of interest to NASA and the military. "AI" has paradoxically been quite successful at solving computer science problems.
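For the avoidance of doubt, the perceive-process-act cycle really is just a loop. Here's a minimal sketch in Python - the environment, setpoint and step sizes are all invented for illustration, not taken from any particular system:

```python
import random

def perceive(env):
    """Sense one 'essential variable' of a toy environment."""
    return env["temperature"]

def decide(reading, setpoint=20.0):
    """Pick an action that nudges the variable back toward the setpoint."""
    if reading > setpoint:
        return -1.0
    if reading < setpoint:
        return +1.0
    return 0.0

def act(env, action):
    env["temperature"] += action

# A homeostatic "mission": hold the variable near 20 despite random drift.
env = {"temperature": 30.0}
for _ in range(50):
    action = decide(perceive(env))
    act(env, action)
    env["temperature"] += random.uniform(-0.2, 0.2)  # environmental disturbance

assert abs(env["temperature"] - 20.0) < 2.0
```

Even this toy loop exhibits the homeostatic "mission": the agent holds an essential variable within a viable band against ongoing disturbance, with no representation of why.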
However, no-one has any idea how a physical system can experience pain, for example (affective, not cognitive). A dogma of AI is that if we had a mathematical theory of how an agent could experience pain, then a program instantiating such a theory would, when it ran as a computer process, actually experience pain. By hypothesis, such a computer process could be tortured (this is a plot device in the SF novel "The War in 2020" - Ralph Peters, 1991). I guess most of us just shake our heads at our lack of any intuition as to how that could work. Ditto for whatever we think occurs when we consider ourselves to be conscious entities. My childhood illusion that the human race has a basic handle on all the obvious problems continues to generate genuine visceral surprise that we still just don't know.
My methodology for my own AI research was always something like this: "what set of ecological problems 100,000 years ago in Africa created conditions which selected for the cognitive-affective solution which human beings exemplify", or if you like "what was the problem to which we are a solution". This poses the problem in a "requirements-implementation" paradigm, or more formally the paradigm of setting up an axiomatic theory and looking at instances of models which satisfy it (models which are architectural/automata-theoretic in character, making explicit the architectural principles underlying human psychology). Evolutionary psychology is a lot more popular now than it was back in the 1980s.
My Ph.D. research applied this methodology in the case of simpler autonomous agents, but I failed to get a handle on how to model a deeper level of complexity - something along the lines of problem-solving communicating agents. Nothing seemed very compelling, and then I had to give it up when the organisation I worked for (STL) was acquired by Nortel. A lot of people think that human psychology has a layered character, with the pre-human "unconscious" overlaid with more symbolic neo-cortical functions - the "triune brain".
I basically buy this, but I have yet to see formal models which either convincingly "reproduce the phenomena" such as the awfulness of pain, or ground such models in an ecological framework. I think that the Jungian theory of human psychological type has some interesting insights (see "Evolutionary Psychiatry" - Stevens & Price, 2001). However, any formalisation will be extremely challenging.
I joined the International Marxist Group youth organisation towards the end of my first year, and the IMG proper a little later. At that time, the IMG was the British Section of the Fourth International and Tariq Ali was one of the leaders. He was generally considered rather emotional and not a truly rigorous thinker, and the real leader was a guy called John Ross (until recently economics advisor to Ken Livingstone, the former Mayor of London!). We did the usual stuff: endless demos, lots of planning, selling the paper. I have a collection of "war stories", like the day we nearly occupied the Chilean Embassy.
I dropped out of the IMG (now the "International Socialist Group") when I was around 26. I guess the whole thing had ceased to be a novelty and somewhere I had lost momentum. I probably still accepted Trotskyism as a broadly correct social theory until my early thirties. The definitive end of that view was when the Soviet Union crumbled to capitalism. A leader writer in, I think, the Guardian, wrote that the fall of "communism" (Stalinism as we thought of it) would definitely sink Trotskyism too, as Trotskyism was sustained by its belief that a "workers state in transition to socialism" was the inevitable successor to Stalinism in Russia. Tariq Ali reported in his obituary of Ernest Mandel, the leader of the International, that Mandel saw the event too as postponing communism in the classical Marx-Lenin sense for "500 years".
I think Marxism impressed me (as it did a lot of people) because methodologically it analysed societal "structure" as routinised, stabilised patterns of human relationships. This just seemed a deeper analysis than the superficial model-building of "bourgeois social science". However, such an ethnographic model of society is no longer the sole ownership of Marxism. It turns out that the tradition of Marx, Lenin, Trotsky conflates a number of trends into one overarching scheme, which is empirically not correct. But it is wonderfully sophisticated and complex! I still recall "History and Class Consciousness" (Georg Lukacs, Merlin 1991) as one of the most inspiring books I ever read.
I have a familiar problem with politics. People take gratuitous moral stands. I tend to see the global human condition more as the dynamics of particularist social groupings, some of which are organised states, some more informal, each expressing their own interests in cooperation and/or conflict with the rest. None of the groups with real power and responsibility take a universalist view of their mission. Yet a partial view by its very nature devalues the interests and even the humanity of groups which "get in the way". This is the slippery slope which can lead to arbitrary bad outcomes for the "bad guys", and interest groups tend to be rather selective about which other groups they choose to demonise. The Nazis always make an easy target because they were pretty bad demonisers in their own right, and they're not around any more in any strength to argue back, or to be accommodated. Other bad things have happened since the 1940s.
It's easy to see why it's in the nature of specific power groups to ruthlessly pursue their own interests, dressing that self-interest in a spurious cloak of universality. It's less easy to see what can effectively be done about it, except at the margins.
I went to Warwick University to become a theoretical physicist. I was completely up for mathematically hitting Quantum Mechanics and Relativity: cosmology was my great interest. Instead, I found myself measuring the heat loss of fluids running through pipes. I despaired, and became a refugee in Philosophy and Politics, joined the International Marxist Group and duly got chucked out of Warwick at the end of my second year.
When I was in the sixth form at Bristol Grammar doing Maths and Physics, my much-respected Maths teacher told us that mathematicians cared about the structure and integrity of the mathematics; physicists had a utilitarian, toolbox approach to maths. They just grabbed any techniques which got them the right answer, and never cared about whether they were violating the applicability constraints of the techniques - that is, if they even knew they existed!
I cared, because my motivation for studying physics was to understand how the universe worked and was put together. The mathematical structures were a proxy for the universe itself. A cookbook approach was hopeless, because it sacrificed explanatory power in favour of a black-box preoccupation with getting the "right answers" via mathematical hacks. In particular, I had real problems with the use of the real numbers.
Mathematical theories couched in the form of functions over R**4 (three spatial dimensions + a time dimension) "explained" the universe in terms of the dynamics of quantities referenced to a four-dimensional real coordinate system. (Quantum mechanics uses complex infinite-dimensional configuration spaces, which does not alter the point that follows.)
Such coordinate systems are ontologically prior to any phenomena. However, the universe is intrinsic - it doesn't depend on us existing (or methodologically shouldn't!). The coordinate systems are an artefact. So such theories could never model the real universe, but could only correlate observations made of it. My other concern was the denseness and completeness of the real numbers: too many actual infinities packed into every interval. This seemed unrealistic - space-time surely couldn't be like that!
However, if you try to do calculus on restricted sets such as the Rationals (Q), it doesn't work, so I felt completely confused. I found it impossible to communicate these difficulties, and they did not appear in the literature back in the 1970s.
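The completeness gap can be made concrete. A small sketch in Python, using exact rational arithmetic from the standard library: Newton's iteration for the square root of 2 produces a Cauchy sequence which lives entirely inside Q, yet whose limit is not in Q - which is exactly why calculus on the rationals "doesn't work".

```python
from fractions import Fraction

# Newton's iteration x -> (x + 2/x)/2 for sqrt(2), in exact rationals.
x = Fraction(1)
for _ in range(6):
    x = (x + Fraction(2) / x) / 2

# Every iterate is an exact rational, and the sequence is Cauchy...
assert isinstance(x, Fraction)
# ...but its limit is not in Q: x*x never equals 2 exactly,
# although numerically it is now indistinguishable from 2.
assert x * x != 2
assert abs(float(x) ** 2 - 2) < 1e-12
```

The sequence converges as fast as you like, but the "hole" at sqrt(2) is never filled; completeness is the axiom that plugs all such holes at once, at the price of those uncountably many points.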
These days, however, I'm a little more sophisticated about mathematical physics, and recently restarted studying physics with the Open University. My objective is to spend as long as it takes to get my head around general relativity and quantum field theory, and to this end I've mapped out a study schedule as follows.
2008: electromagnetism (SMT359). Covers Maxwell's equations and some applications. (Done).
2009: the quantum world (SM358). Basic quantum mechanics. (Done).
Then a move to the maths MSc programme, concentrating on mathematical physics modules including:
- applicable differential geometry (M827). For general relativity.
- functional analysis (M826). The OU's nearest thing to a treatment of Hilbert spaces for quantum field theory.
One of the irritating things about the OU's maths MSc programme is that it's almost perversely unsuitable for students of quantum mechanics. I quote from a student review and faculty reply on the M826 website.
Student "I didn't enjoy this course at all, and spent most of it having no idea what was going on. I chose it because I'd read (in Gowers' "A Brief Introduction to Mathematics") that Hilbert Spaces are one of the most important things in mathematics. I now more or less know what one is but don't know why they're important. "
Faculty: "Perhaps the problem here is that M826 is not meant to be a course on 'Hilbert Spaces and its applications'. Indeed, most of the course concentrates on linear spaces that have quite a general topological structure. Only at the very end does the course focus its attention on Hilbert Spaces. Inevitably there is by then little time to do more than define a Hilbert Space, examine its structure (in cases where it is separable), and characterize its dual spaces.
"In particular no attempt is made to examine the rich structure of the spaces of linear operators acting on a Hilbert Space, nor is any attempt made to describe the many applications of such spaces (e.g. to quantum mechanics, statistical mechanics, optimisation, partial differential equations, etc.). Clearly anyone who studies the course hoping to learn all about Hilbert Spaces and its applications could be disappointed.
"The work in M826 is quite challenging and requires a good working knowledge of basic set theory, vector spaces, analysis and topological (mainly metric) spaces. It is also necessary to have an aptitude for following proofs and understanding how they relate to the result being proved. Anyone whose preference is for less abstract mathematics may find some of the work difficult to follow. "
Perhaps they will have had a change of heart by 2013.
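For what it's worth, the structure the course only reaches at the very end is easy to exhibit numerically. A minimal sketch in Python - the vectors and truncation length are invented for illustration: truncations of the square-summable sequences 1/n and 1/n**2 (both live in l2, the textbook separable Hilbert space), with a check of the Cauchy-Schwarz inequality that the inner product guarantees.

```python
import math

# Truncations of two square-summable sequences, i.e. vectors "in" l2.
N = 1000
x = [1.0 / n for n in range(1, N + 1)]        # (1/n)  - square-summable
y = [1.0 / n**2 for n in range(1, N + 1)]     # (1/n^2) - square-summable

inner = sum(a * b for a, b in zip(x, y))      # <x, y>
norm_x = math.sqrt(sum(a * a for a in x))     # ||x||
norm_y = math.sqrt(sum(b * b for b in y))     # ||y||

# Cauchy-Schwarz: |<x,y>| <= ||x|| * ||y||, for any inner-product space.
assert abs(inner) <= norm_x * norm_y
```

None of which, of course, is the hard part of M826 - the hard part is the completeness and duality theory, which no finite computation illustrates.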
After school I occasionally tried to stay in touch, but to do Judo requires a commitment to train hard and to go through the grading scheme. I could never make Judo the centre of my life. I tried Karate in the early eighties, when I started R&D work in Harlow at STL. But again, it requires extreme physical fitness, and I lacked an inner incentive to do that. More recently I studied T'ai Chi.
Beneath the surface, there is an intellectual continuity between computer science, IT, telecoms and strategic marketing which forms a path for my career. After my teenage obsession with theoretical physics, and my excursion into Marxist politics in my twenties, my thirties was the period when I discovered the wonders of theoretical computer science and artificial intelligence. This may sound weird, but the really fascinating areas of computer science for me are in programming language semantics, computational logic and mathematical logic itself (proof and model theory). This was my Ph.D. topic of course, as well as the subject of my completed first degree (with the Open University). So when I had a chance to do software research at Standard Telecommunications Laboratories in Harlow in 1982, I was over the moon - paradise!
Telecoms was what I did in my forties. I first learned about telecoms when STL was acquired by Bell-Northern Research (BNR) in 1990 (BNR was the R&D arm of Northern Telecom and Bell Canada). From computer science I went through a whole new learning curve looking at the design of SDH and fibre systems, Frame Relay and ATM networks, then voice and IN network design, and finally IP network architecture, design and engineering. This was high-touch consultancy, leveraging the vast experience of BNR in North America to advise new carriers in Eastern and Western Europe on how to build modern networks. My first encounter with customer-facing consultancy, and it worked for me.
Since leaving Nortel and setting up Interweave Consulting, I have had the experience of freelance consulting, which is a personal growth experience when it comes to personally carrying out sales and marketing!
I grew up in my teens with Asimov (Foundation series - the first three), Heinlein (Starship Troopers), Clarke (various). I still remember the yellow-jacketed Gollancz SF collection in my local library in Westbury-on-Trym, Bristol.
To be brutally honest, what works for me is a good plot, characters I can feel some empathy with and can believe in, something military and high-tech, and some completely awe-inspiring ideas. This is a multi-pass filter, and only a few folks get through. Greg Egan's Quarantine and Permutation City are particularly good. Dan Simmons's four Hyperion books are stupendous. Greg Bear's Eon and Eternity are well up in my "best ever" list.
Other people I read are Iain M. Banks for his awesome intelligence as well as literary excellence; Peter F. Hamilton for his well-crafted space opera; Neal Stephenson of course, even for his baroque cycle :-) ; Paul J. McAuley for Fairyland; Richard Morgan for his extraordinary grasp of low-life ultra-violence in Altered Carbon and its sequels. I really rated Mindbridge by Joe Haldeman, and of course the first two or three of the Ender books by Orson Scott Card.
A brain surgeon meets a novelist at a party. “When I retire,” says the surgeon, “I’m thinking of taking up writing.”
“That’s a coincidence,” replies the author, “when I retire, I’m thinking of taking up brain surgery.”
Confession. I have written a novel. The experience is interesting - you look at other people's work with a new eye, seeing the plot devices and the proximate reasons for introducing certain characters and events. You see the plumbing behind the literature. Everyone should try it and learn.
My novel is science-fiction - called "Exopsychology". The plot is that hostile aliens have set an asteroid on a collision course with Earth, and the hero, a psychologist, works on psychological warfare strategies against them. It got rejected by one agent, but that's not unusual. I didn't resubmit it because I was dissatisfied with it myself.
Writing is harder than it looks, as the joke at the beginning of this answer suggests. My problem was that I had lots of really interesting ideas I wanted to write about: alien motivation; the Fermi paradox; variant brain structures in aliens (alternatives to the triune brain); methods of psychological warfare; as well as the usual space-opera furniture of hi-tech warfare. These preoccupations sit uneasily with character-development and plot, unless handled with consummate skill. Plot is fundamental - without a good page-turning story the thing is doomed. Some authors' stories are woven from the magic of character alone: however, that's a stretch for the kind of people who write science-fiction. So in my own estimation, my story has poor characterisation and a plot which fails to smoothly build: too much interjection of 'really interesting ideas' which simply slow the pace down.
One day, when I have time, I will ruthlessly prune the ideas which don't support the plot, and then complete the plot development (which I was hoping to put into book two!) so that the story ends coherently.
My book "Business Strategies for the Next-Generation Network" was published in December 2006. I tried to ground the technical ideas in personal experiences, as amusing and interesting as possible, and to adopt an emotional tone: adjectives such as sardonic and scathing come to mind - there have been so many failures in telecoms. Please feel free to buy it!
What is the purpose of an animal (say a mouse in the wilderness)? It has a natural design which has been selected to keep it alive until it can successfully reproduce. That is all there is to it - no higher purpose can be detected. If the mouse had a choice, which it doesn't, we would advise it to act according to its nature, which is to do those things which maximally support its ability to be an ancestor.
What is the purpose of a human being? Like the mouse, we are constructed to a design which optimises our ability to be ancestors. However, we are social creatures, and, we cannot survive and be reproductively successful except in social contexts created by the social groups to which we are affiliated. These groups assign roles to us, and within these groups we are obligated to participate in role-negotiation, objective setting, planning and execution in a way which furthers the overall interests of the group and our own role within it.
This behavioural framework of social interdependence was presumably selected for when we lived in extended kin groups, and subsequently as we came to live in social groups of largely unrelated individuals. Robert Trivers' theory of "reciprocal altruism" has a lot more on this, in "Natural Selection and Social Theory: Selected Papers of Robert L. Trivers" (2002).
Evolutionary theory doesn't have a problem if my social group wipes out your social group (this was the whole point of Trivers' analysis), so applying an unmediated universalistic ethic can't be right: a critique of pacifism. Within the group, however, skilful diplomacy and the ability to build strong consensus are clearly pro-survival. The earliest religions seem to have been programmes for cementing social bonds within social groups riven by discord. These religions were given a universalistic spin (e.g. Christianity, Buddhism, Islam) when they became the property of empires which needed legitimacy. Note that for these religions (possibly excepting Buddhism), universality stopped at the boundaries of the empire, where the pagan or infidel was encountered, and slaughtered.
My personal conclusion is that we should do those things which are in conformity with our natural design. Namely, act to maximise our abilities as human beings to live and work effectively together as one community, or an alliance of communities. Because we are inter-dependent, this is the best outcome for all our long-term interests, when it can be made to work. But sometimes you have to choose which group to align with, and then whack the other guy. However you call it, you will then just have to live or die with the consequences. The best idea is probably to choose to support those groups which (in your opinion) hold out the best hope for long-term human social progress - where you can identify which that is!
When I was doing AI (Artificial Intelligence) and involved in agent theory, I was struck by the following foundational issue: researchers usually decided arbitrarily which problem their system was going to address. It might be successfully stacking bricks on a table, playing chess, or constructing scene descriptions from primary visual sensor data. Obviously the problem selected had a very considerable effect on the kind of systems and solutions generated. To get away from this inherently arbitrary problem-selection process, some researchers cycled back to considering homeostatic systems by analogy with biological systems, where the motivation is to survive (so as to have descendants). This was in fact the shape of the pre-AI research programmes in the golden age of cybernetics (e.g. Introduction to Cybernetics, W. Ross Ashby, 1956 - one of my early heroes).
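Ashby's homeostat is easy to caricature in a few lines. A sketch in Python - the dynamics, viable band and list of candidate "wirings" are all invented for illustration (Ashby's machine stepped through essentially random wirings; a fixed list keeps the sketch deterministic). The point is his "ultrastability": when the essential variable leaves its viable band, the unit blindly changes its own organisation until it happens upon a stable one - nothing in the machine knows which wiring is good.

```python
def homeostat_step(state, gain):
    """One step of a toy one-variable homeostat: linear feedback dynamics."""
    return gain * state

# Candidate "wirings" the uniselector steps through; only |gain| < 1 is stable.
candidate_gains = [1.5, -1.8, 1.2, -0.4, 0.9]
position = 0
state, gain = 5.0, candidate_gains[0]   # start with a destabilising wiring
rewirings = 0

for _ in range(500):
    state = homeostat_step(state, gain)
    if abs(state) > 10.0:               # essential variable out of viable band
        position = (position + 1) % len(candidate_gains)
        gain = candidate_gains[position]  # blind change of organisation
        state = 5.0                       # variable knocked back in range
        rewirings += 1

assert abs(gain) < 1.0    # it has settled on a stabilising wiring
assert rewirings == 3     # having blindly stepped past three unstable ones
```

Survival-as-stability drops out of the architecture rather than being programmed in - which is exactly the attraction of the cybernetic framing over arbitrary task selection.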
Most biological systems are set endless tasks by the environment. By contrast, some humans have the luxury of being quite well-off, and have no immediate survival problems to address. How should they spend their time when all courses of action appear to be pointless? Wealthy and dysfunctional film stars often provide examples for analysis, as does the recent angst expressed by the remaining dot-com billionaires as to how much to leave in their wills to their offspring.
I am struck by how many people invest their lives in their enthusiasms (what we used to call "hobbies"). I am not the first person to observe that all enthusiasms seem slightly odd to those who don't endorse them. In the best of cases, people can make their lives into an art-form through the level of skill and accomplishment they develop in their special areas of interest. This implies that culturally-enriching the common human condition is as much a purpose of life as the more mundane team-activities which over the millennia kept our ancestors physically alive. What is less aligned with our underlying nature are such opt-outs as suicide, or the social-suicide of withdrawal from all social activity, where this cannot be placed in a broader social context. Crudely, to do these things pointlessly is, in the microeconomics jargon, shirking.
That was it for Clare, but I persevered and eventually accomplished the required three flights from the top of a rather steep hill, landing successfully each time. When I received my pilot licence, I told the instructor I would never fly a hang-glider again. I had already concluded that they were heavy, uncomfortable to fly, very expensive and inappropriate to the flat part of England where we were living (Essex). Instead my heart was set on paragliding.
My local paragliding club was based at North Weald airfield, near Harlow, where I worked at the time. The launch procedure was rather primitive. You stood, harnessed up, with the parachute held behind you like a wall by two of your fellow pilots. A cable was attached at chest height to your harness, and stretched ahead about 1,000 feet to a jeep, tiny in the middle distance. When all was ready, a bat was waved and the jeep floored the accelerator. You were dragged forwards and into the air at 60 miles per hour, fighting the buffeting all the way up until you hit 750 feet above the airfield. The jeep stopped and you pulled the quick release and watched the cable fall away.
You now had a couple of minutes to fly around, pull stunts like stalls or centrifugal turns which swing you out like a merry-go-round, losing height until you lined up for the landing. It was fun.
After I had my paraglider pilot's licence for towed-launch, I went to the hills in Wales to get my licence for paraglider hill flying. That was fun too, and more varied as you could fly along a ridge line on the uplift, waving to walkers and gently adjusting to the way the lift varied along clefts and outcrops.
While paragliding was fun, it was scary in anticipation. I think everyone was rather quiet as we were in the truck driving out to the mountains - we knew too much about the failure modes, not all of which could be anticipated (air is invisibly unpredictable, and a canopy collapse close to the ground is unrecoverable).
I eventually gave up paragliding when I got used to it. Think diminishing returns vs. the disutility of spending a whole Sunday away from my growing family (you could read this as guilt eventually trumping self-indulgent selfishness). You can't dabble with flying: without constant practice you make mistakes, and the sport is unforgiving.
Saturday, May 15, 2010
Updated: December 2016.
I was a self-employed telecommunications consultant with 30 years' experience of network design; product portfolio, network and systems strategy; and telecommunications architecture.
Now I'm retired and doing a variety of things as documented on my blog.
• Extensive experience of public carrier IP networking and next-generation network design.
• VP-level senior management experience in global corporations.
• Successful strategist and agent of change in large companies.
• Superior analytical, communications and delivery skills.
• The ability to develop a vision and inspire people through to completion.
Network architecture consultant to international law firm, London -- Jan 2012 - July 2012.
Worked with an international legal firm to define their requirements for a new enterprise network involving multi-site connectivity with data centres. Wrote the RFP and managed the vendor evaluation and selection process through to contract award.
Network architecture consultant to Peel Media City, Manchester -- Dec 2008 - Jan 2009
Network architecture consultant to Dubai World Central -- Jan 2008 - July 2008
Programme Management - BT Wireless Cities -- May 2006 - Sept 2007
Senior Consultant – Mentor Technology International, working with a number of clients including an MVNO developing a new GPRS-based proposition, a VoIP company moving to SIP, a major multinational bidding into BT's 21st Century Network programme and a leading satellite broadcaster on its IPTV/Video-on-Demand programme. -- April 2004 - Feb 2006
Vice President - Portfolio Development at Cable & Wireless Global -- April 2002 - April 2003
Chief Architect - Cable & Wireless Global -- Jan 2001 - March 2002
Technical Architect - Nortel Networks -- May 1990 - Sept 1999
Independent Consultant Feb 2006 - March 26th 2014
Mentor Technology International Apr 2004 - Jan 2006
Independent Consultant Apr 2003 - Apr 2004
Cable & Wireless Global Jan 2001 - April 2003
Independent Consultant Oct 1999 - Jan 2001
Nortel Networks Sept 1991 - Sept 1999
Standard Telecommunications Laboratories March 1982 - August 1991
PhD in Artificial Intelligence (“Agent Theories and Architecture”) 1985 - 1989
BA Maths & Computer Science (1st Class) 1984.
Open University 3rd Level Physics courses in Electromagnetism (SMT359, 2008) and Quantum Theory (SM358, 2009) (both with Distinctions).
Business Strategies for the Next-Generation Network -- Dec. 2006
PDF available here.
Nigel Seel: about me.
Here's a link to my 23andMe genome.
Sciencefiction.com: 2011-2012 I was a contributing writer of science features and reviews.
In 2010 I have worked with Cable and Wireless Worldwide and other clients on UK Government security accreditation at IL2/IL3.
In 2009 I did some work with Peel Media for their "Media City" project in Salford, Manchester. This was to design and cost a metropolitan area network (MAN) for the cluster of media companies expected around the new BBC facility there. This would be a super-high-capacity multimedia network.
During most of 2008 I worked in Dubai with Dubai World Central (DWC). DWC were building the world's largest airport, surrounded by residential, high-tech, exhibition and logistical cities as a new hub next to the Jebel Ali seaport. This complex, which was planned to house more than a million people by 2017, required a state-of-the-art IP/MPLS network. I was working with a systems integration company and the client on the definition of network requirements, traffic, architecture, and design.
In January 2008 I had a brief one month contract, again through a subcontracting company, with the BT outsource project at Credit Suisse in Canary Wharf. My job was to set up a project organisation for the global voice and data equipment refresh programme.
During 2006 and 2007 I was working with BT Retail in their programme to roll-out metropolitan WiFi (the Wireless Cities project) in cities across the UK. This centred around negotiations with Councils as regards access to street furniture and possible applications they might care to take a lead in trialling. The core of the assignment was city project management of the diverse BT work-streams at city-level.
In early 2006 I worked with Mentor colleagues on a project with BSkyB developing the IPTV side of their business.
In 2005 I worked with VT Communications looking in detail at the commercial side of their global radio business and the business case for expansion and new Internet services (with Mentor).
Andrew Wheen from Mentor and I presented a one-day workshop at IBC's conference on Fixed-Mobile Convergence in Barcelona at the end of June 2005, with a focus on IMS and IPTV. I also presented a paper.
Earlier that year, I worked with Inclarity, a leading wholesale VoIP Services Provider, to examine in detail their future SIP strategy.
In late 2004 I worked extensively with the Samsung team bidding into BT's 21st Century Network programme with focus on the MSAN (multi-service access node).
Earlier in 2004, I joined a project with a mobile virtual network operator. Among other things, this involved sorting out the tunnelling architecture for IP datagrams between the mobile partner's GGSN, the ISP partner's Internet backbone and the enterprise customer's network.
All of the above work was done via Mentor (which subsequently went into administration in March 2006). Prior to joining Mentor in April 2004 as an employee, I traded as Interweave Consulting and did work with BT on their SME Internet eBusiness platform (with Invocom) and with MCI on a review of their European business.
Before re-establishing Interweave Consulting, I spent two years with Cable & Wireless Global as VP Architecture and VP Product Development, with most of that time (the Internet boom) in North America. I joined C&W in 2001 after two years consultancy via Interweave Consulting. Interweave was set up in 1999 after I left Nortel where I was a Director specialising in carrier architecture and design. For more background, see my CV.
I think Google will still find this site (http://interweave-consulting.blogspot.com/) on its first search page for "nigel seel".
Update: True to Yahoo!'s invariable habit of getting things wrong, no sooner had they cancelled my website with blinding speed (yep, you can't go there any more) than they also cancelled my "email@example.com" email account.
Now why did they do that? I pay for it separately and it's not web hosting ... it's email, guys.
I can only assume they have mindlessly bundled the two products so you can't have one without the other. God they are irritating. I have sent a query off to the amusingly-called 'customer care' team and expect an answer, oh, I don't know when.
In the meantime I CAN be emailed on firstname.lastname@example.org but if I don't get this fixed I'll be opening a googlemail account.
Tuesday, May 11, 2010
Somewhere within the centrist chez-Cameron Tories plus the Liberals minus their tree-hugging socks-and-sandals brigade must lie the nirvana of an electable modernised Conservative Party.
Surely this is Mr Cameron's Clause Four moment.
Over this last week I have had a horrid cold. As my throat ached and energy leached from my coughing body, with bleary hate-filled eyes I reviewed everyone who might have infected me. Luckily for them no likely candidate came to mind. I am now in the final stages of nose-running deafness and an immune system which is as battered as the country's deficit.
Is anything happening in Physics? Come on, we all paid the money, what is that Large Hadron Collider doing with all of Geneva's electricity?
Tuesday, May 04, 2010
Adrian and Beryl Seel in Wells
Clare on the Cathedral Green
Adrian and Beryl Seel at the May Fair
In the Rugantino Italian restaurant earlier that afternoon, as described in the previous post, the four of us (my mother, Clare, Adrian and myself) had been waiting for our meal to arrive and discussing Adrian's forthcoming trip to Chile.
Me: "Have you been checking out the Michel Thomas Spanish tapes?" [They're CDs actually].
Adrian: "I'm going to do a blitz on it just before I go, when I'm less busy."
Me (sceptically): "Do you remember any of it? What does 'Hasta la vista, baby' mean?"
Clare: "'See you tomorrow', doesn't it? And the 'h' is silent."
Me (snorting): "Don't think that was what Arnie meant ..."
[Note: Clare is right that the 'h' is silent, though "hasta la vista" is closer to 'see you later'; 'see you tomorrow' would be "hasta mañana". Arnie's use is ironic.]
At this point an attractive waitress came up.
Clare: "Are you from Italy?"
Waitress (in perfect English): "No, I'm from Spain."
As the waitress departed I said:
"That was an ideal opportunity for you to ask her where she came from in Spanish."
Adrian forbore from reminding me that as we didn't initially know from whence she had come, it would have been more appropriate to have asked her in Italian ... if any of us had had any, that is.
Adrian: "I could have asked her, but what would have happened next?"
He waited expectantly.
Clare: "Right, she would have told you in rapid, perfect Spanish."
Beryl: "That's exactly where you needed to change the problem into an opportunity. There's loads of responses - how about 'I didn't get any of that, why don't I buy you dinner sometime and maybe you could help me understand?'"
Adrian's sardonic expression invited us all to drop dead.
Monday, May 03, 2010
Yesterday we attended the candidates' debate in the Town Hall. From left to right across the stage they had the UKIP candidate, an earnest eurosceptic caver; an ENFJ social-working Green who came across as the clown candidate; and the Tory, Heathcoat-Amory, who is apparently an amateur astronomer (Clare says that would be on his Scottish estate). Then we had the tribal, ranting social worker from Labour and finally, from the Liberals, Tessa, the intelligent power-woman with the unfortunate Green/CND backstory. The BNP guy was unavoidably detained.
The Town Hall was packed (several hundred people, I guess) and there was much applause and jeering, although little new was said. I fell asleep about halfway through the 90 minutes.
Today we've booked lunch at the Rugantino Restaurant by the Cathedral. As we walked by this morning all the stalls and marquees were in place for the traditional May Fair on the Green, so that's where we'll be this afternoon. [Update: Rugantino was absolutely delicious - we were in a little Italian bubble while we ate there. The only quibble was that service was a tiny bit slow.]
Then it's back to Reading.