''The following is a reproduction of a serialized essay originally published by John Clifford, a.k.a. [https://mw.lojban.org/papri/User:John_Clifford la pycyn], on his blog [http://pckipo.blogspot.com/2015/06/the-role-of-errors-in-history-of.html ''sprachschaffunganmerkungen''] in 2015. This essay is one of the definitive critiques of Lojban, yet it remains largely obscure despite its author’s preeminence among first-generation Lojbanists. It deserves to be reprinted in a more accessible form. I have done some copy editing to eliminate typos and improve the formatting for this Wiki, but have not altered the content. – H.T.''


= The Role of Errors in the History of Loglans =
Originally, this is not surprising, since the scientific foundations for this sort of description only appeared at the same time as Loglan began (1955) and the Loglans lost their contact with academic linguistics (they never had much with field linguistics) in the early 1960s, when these theories began to make some way. On the other hand, the epigones of the Loglans were largely computer scientists, and so theories of computer languages, which are more static – not to say linear – dominate most theoretical discussions of the grammar of the Loglans. This theory has been directed mainly at producing parsers to derive a grammatical description linearly (YACC and PEG seem to be the current models).


But surprisingly, had the Loglans kept in contact with linguistics outside the computer field, in anthropology and philosophy and just pure linguistics, they would have found that it was at the forefront of the field. According to not a few schools of linguistics, every sentence of every language is derived from a formula of some worthy successor of FOPL, by some appropriate form of the moves outlined above. The theoretical base is not, of course, strictly FOPL++, but an abstraction with essentially the same structure. And the moves will be different for each language, but basically of the same sort: shifting linear order, collapsing commonalities, eliminating detritus, and so on. The major difference for natural languages, aside from generally a much larger set of rules, obligatory and optional, is that they are not required to be reversible. That is, a single linear string of words can be derived equally correctly from very different formulae. So, again I come to the point that the Loglans’ interest lies entirely in monoparsing.
 
== Maxim Two: ''Loglan was designed to test the Sapir-Whorf Hypothesis'' ==
 
This would be the sexy metaphysical Sapir-Whorf Hypothesis (SWH) of the 1920s into the ’60s. Although it was never formulated very precisely, the general idea was that the structure of the language you spoke conditioned the way you viewed the world, giving you a naïve metaphysics which pervaded your thoughts and culture. Over the years there were a number of more detailed positions about how strongly to take “condition”, from “nudging you in a direction” to “totally determining your world view”. The strongest position was hard to hold in view of the numerous expositions of metaphysics of incompatible sorts in languages of a certain type (Process Philosophy in plug-and-socket English, for example, or the fact that both Plato and Aristotle wrote Greek). The weakest claims hardly came up to the level of a hypothesis rather than a casual observation, since nothing really counted as a counterexample. But somewhere in the middle there seemed to be a significant thesis.
 
The roots of this discussion lay in the change around the beginning of the 20th century, from “civilizing” (deculturating) or killing tribal people to learning how they lived and viewed the world (empirical anthropology). And with that came studying the tribal languages in their own terms, rather than merely finding out how they expressed various things from Latin (or Hebrew or, for a really scientific approach, English) grammar. And, as these studies piled up, it became clear that people spoke languages radically different from one another and especially from English (and the rest of the Indo-European European languages). And it was equally clear that how they described the components and structure of the world was very different from the familiar categories of naïve Euro-Americans, and, indeed, from the theories of not-so-naïve philosophers.
 
The familiar languages, which came to be called Standard Average European (SAE), were plug-and-socket affairs of nouns, which filled holes in adjectives to make bigger nouny things, and verbs, holey things which eventually had their holes filled by the nouny things to make sentences. Now there were languages which seemed to have no nouns at all, only verbs, say. Even people’s names were verbs. And then there were languages that had only nouns (or maybe they were adjectives) and no verbs. And words that could not be described in familiar European grammatical categories.
 
These strangenesses extended to vocabulary also. Beyond the apocryphal tales of the twenty-seven Eskimo words for snow, there were facts like that some languages had no color words except “black” and “white”, or that they used the same word for ''blue'' and ''green'' (or different ones for ''dark blue'' and ''light blue''). These were less surprising, since there were occasional differences of this sort among the languages of Europe (or even within some one of them). But they tested out as genuinely affecting how people perceived the world. (Told to put all the blocks of the same color together, Navajo children regularly put the blues and the greens in the same pile, say.) And there was other evidence that what you called a thing affected how you behaved in relation to it (Whorf on empty oil drums, for example, or, more significantly, word choice in propaganda). But the most interesting such differences came in the details of the language, the essential categories, like (loosely speaking from an SAE perspective) tense and case. Many languages did not have tense at all, even when they had verbs, and what they had instead (i.e., to deal with time relations) were elaborations on aspects and the like from the richest of Indo-European grammars and far beyond. Similarly, what happened to nouns, when there were some, bore little relation to familiar cases, even to the complex constructions on Finnish nouns. They even overlapped with tenses in some cases. And these differences seemed to have metaphysical significance, since they spoke to how the world of space and time (or ''whatever'', it must be said at this point) was organized.
 
And now that the anthropologist-linguists could interview their subjects directly, rather than through an interpreter (or string of interpreters), they could get direct information about how they viewed the world. And what they found turned out to be a range of different metaphysics, of views about what is in the world and how it is put together.
 
Although there are different details for each group, they came to be grouped together into a few broad categories. There was, of course, the “natural” view of individual, independent things which took on properties and engaged in activities, but remained essentially the same throughout. Time and space were linear and were the framework within which things operated. By contrast, there was the world as a giant activity (maybe a process), involving countless subactivities and subprocesses which flowed into one another, or passed away or started up, with little vortices which were now part of one process, now of another and were counted as one only because of spatio-temporal continuity. Space and time were relative to particular processes and often circular as a result. Then there were the views that held that what there really were were enormous entities, variously spelled out as ''masses'' and ''universals'', and events were simply the collocation of chunks (or projections) of these archetypes, which were the primary individuals. Time and space were a derivative notion, if they played a role at all. (There were actually several other language classes and metaphysics discovered, but these three were the most discussed and developed and they show the essentials of process.)
 
Comparing their language data and their metaphysical data, anthropologists discovered some interesting connections. It seemed that speakers of SAE languages (even if spoken far from Europe) were inclined to view the world as independent things entering into activities and so on, and to speak languages with tenses, and take time and space as frameworks. And conversely. Similarly, process metaphysics and a relativist view of time went with languages which were virtually all verbs – most of which had aspects. And archetype metaphysics went with all-noun languages. Correlation is not causation, of course, and here it might go either way, so for several decades there was a search for a test to find whether there was causation (preferably from language to metaphysics).
 
So, in 1955, James Cooke Brown, a newly minted social psychologist and assistant professor at the University of Florida, hit upon the idea of constructing a language, Loglan, that was not like any other – certainly not like that of the students who would be his subjects – and running some experiments with it. He would test subjects in a range of psychological and cultural traits, teach them the language thoroughly, then test them again to see what changes (if any) appeared (teaching other students some familiar language as a control group). But constructing the language turned out to be more complicated than planned, as new ideas kept arising to be incorporated – and old ones needed to be discarded. So the experiment was never performed. But the idea of the experiment – and the language that was to embody it – gained some public notice (''Scientific American'', June 1960), and people asked about it. Brown had by then invented ''Careers'', a popular board game, and left academia, but from time to time he encouraged those interested in Loglan, getting some grants for developing the language and self-publishing various books about the language, giving enough details for people to manage intelligible utterances in it. In 1975, he started a major effort, publishing the most thorough books so far and starting an organization to promote the language (with many goals beyond that of a hypothesis test), including a journal for discussion of and in the language.
 
In the classic politics of international auxiliary languages (which Loglan always officially denied it intended to be, but…) Loglan spawned Lojban, a virtual clone (remembering that clones differ markedly in outward appearance), which, after an unpleasant lawsuit, proceeds on its independent way, diverging ever more from the original, as it too has developed. Neither language still says much about SWH, but each pursues other sorts of goals. The test of SWH, for which Loglan was started, has never been performed or seriously attempted.
 
And this is just as well, since Loglan is totally misdesigned for that purpose. Loglan is based on First-Order Predicate Logic (FOPL) and, though it has come to not look much like it, it retains that basic structure. But FOPL is the product of over 2,000 years of European development, put into final form around the beginning of the 20th century by English and German logicians (with significant help from French and Italian and eventually Polish ones); its entire history is in SAE languages. Not surprisingly, then, it is a paradigm case of an SAE language, terms plugging the holes in predicates to make sentences. As a result, teaching it to English speakers (the likely test subjects, but any Euro-Americans would do as well) would be merely exposing them to another language of the same type, presumably merely reinforcing their existing metaphysics rather than introducing a new one. I suppose one might try to find a group of speakers of, say, a process language and teach them a Loglan. But the process of devising appropriate tests for the new language and culture is prohibitive.
 
And futile. SWH in the metaphysical form dropped out of academic interest shortly after Loglan started up. Its underpinnings were made questionable (at least) by developments in the 1950s and ’60s in linguistics and the other social sciences. On the one hand, the differences between languages were found to be very superficial, with a basic common core across all languages. On the other hand, the way that people viewed the world and their place in it turned out, on more thorough examination, to be pretty much the same at the basic level. The great metaphysical differences proved to be merely a linguistic construct, made of inadequate analysis and incomplete observation.
 
In particular, in one major division in theoretical linguistics, sentences were seen as built up from particles very like terms and predicates into basic units, which were then combined and transformed through a series of processes, resulting eventually in an utterance. The stages at which an utterance came to take on the peculiar surface structure of a given language were very late in the process; in some versions, even just the last step before phonetic realization. While these theories are not universally accepted (or even respected), their analytic and explanatory power makes them a major force in the field. Even their opponents, those who point out, for example, that the process is too complex to allow for creating individual sentences on the fly in real time, or that it cannot account for changing sentences in mid-utterance, or that finding the same structure at the root in all languages looks suspiciously like an artifact of the procedures of analysis, still make use of some of the results. To be sure, some branches of this general pattern, like the claim that the basic structure just is FOPL – or, rather, an updated intensional version – are less widely held (or understood or developed) but are especially interesting to the Loglans, since they place its creation in the mainstream of linguistic research.
 
On the other side of the issue, the 1950s and ’60s saw a new drive to put more science in the social sciences (well, the linguistic developments were part of that, too). In particular, there was a growing interest in creating objective tests for characteristics that the various social sciences were interested in. A report on what a subject actually did in certain situations was generally considered more significant than what the subject said it was doing. Indeed, language-moderated data generally required some care in use, both from the subject and from the interpretations of the observers. So it was seen that people with different languages behaved very similarly in a variety of situations which were created (it was thought) to test the subject’s view of itself and of the world around it. The result seemed to be that people everywhere behaved as though they were separate entities, not vortices in a stream nor chunks of a greater whole, and that they interacted with other things which were also independent, separate, objects. While all manner of challenges have been raised to the interpretation of these results and not all have been met successfully, the basic likeness of the non-verbal responses to situations remains, whatever its explanation. So, the final word (you wish!) on SWH is just that, when speaking about their world view, speakers spoke languages which their examiners took literally: process-language speakers were viewed as having a process view of the world because they reported that view in a process language. But non-verbally they did nothing different that fit with the supposed view.
 
SWH had two other versions which persisted after the metaphysical one disappeared. One is the New Age version that grows out of the metaphysical. In the 1950s to ’70s (at least), when people were seeking some sort of mental/spiritual experience of a different world view, the suggestion (little understood in detail) that coming to speak a radically different language would produce this effect led many people (well, dozens) to learn the language of their particular path: Sanskrit, Chinese and Japanese, mainly, with no particular effect that could be traced to the language. Others, wanting to get away from all linguistic/cultural conditioning, sought to transcend language by meditating on sounds or meaningless phrases or expressing themselves in glossolalia, again with effects that did not seem to be particularly related to the unlanguage involved. But the idea moved to the science-fiction and hence conlang world, where it thrives. Starting a little early (1948) with Orwell's Newspeak, which makes its speakers unquestioning servants of the grammarian state, there have been languages constructed – or at least described – to manage all manner of useful traits: intelligence, happiness, spirituality, attractiveness, and so on. Aside from some doubts about how well these languages are designed for their intended purposes (one popular one aimed at promoting a positive attitude is overloaded, more than two to one, with negative terms), the results have not been confirmatory of the general plan.
 
The other SWH that survives is the vocabulary version, which was dismissed as uninteresting and trivial in the early days. This version actually received some support from the more objective tests that harmed the metaphysical version. To be sure, it was not all success: where the old test, telling Navajo and Anglo children to put blocks of the same color together, led to the Navajo putting blue and green blocks in the same pile, the new test, which omitted reference to color (but forced that as the deciding factor), resulted in all the children creating virtually identical piles. But at the micro level, those same Navajo children were slower to identify colors as being like sample one or sample two when both samples were in the turquoise range of the Navajo word. The differences were microscopic, but enough to show that some features – i.e., vocabulary – of a language do affect the way we see the world. The result most often seen touted as demonstrating SWH is the fact that Russian speakers, who have two words for blue, one for lighter and one for darker shades, are 0.17 seconds faster at identifying a sample flashed on a screen as being light or dark. I note this triumph without comment.


== Preface by the author ==

Disclaimer: This is my personal account, from my point of view. The events involved are described as I remember them and interpret them. The science involved is as I understand it and extrapolate from it. I have tried not to assign blame here; I think most of the errors were inevitable in the situations involved and were probably seen as errors only in corrected hindsight (if at all). Others may have different memories or interpretations, read the science differently, and disagree about what was an error, but this is my story. Bring your own salt.

I am dividing this essay into sections headed by various claims that were made for Loglan (and Lojban). Most of the errors discussed here attach more or less well to one of these claims; a few others can be sandwiched in. These claims have played a major role in the spread of interest in Loglans and have, in various ways, guided developments over the decades, so they may be an informative and useful guide for presenting the problems.

== Maxim One: ''Loglan is spoken Formal Logic (or Symbolic Logic or First-Order Predicate Logic)'' ==

In many ways, this is the root error, from which the others derive. Most of the features later claimed for Loglans or sought for them derive from similar features had by or claimed for First-Order Predicate Logic (and its predecessors into the 19th century and successors into the 21st). The formulae of First-Order Predicate Logic (FOPL) are syntactically unambiguous; there is only one way to analyze one. Translating an argument into such formulae provides a definitive way to demonstrate the validity of the argument (or its invalidity and where it goes wrong). Such translations also reveal misleading features of ordinary language, which give rise to many needless confusions and disagreements (and much metaphysics, some would say). Thus, FOPL is a valuable tool for rational discussion and for promoting understanding among people of different views, since it can be used to reveal the structures of any language.

Of course, the claim that some set of formulae was a translation of a given argument is open to some disagreement; there is no automatic procedure for such translations as there is for judging the validity of the translated set. Thus, the validity of many historic arguments (the Ontological, as a prime example) is still undecided. Of course, if the argument was given in FOPL – or an appropriately fleshed-out version of it – to begin with, this problem would disappear. So, the construction and use of such a language (partially realized for present purposes in careful ordinary German or English) became the goal of some logicians/philosophers from the ’20s on. James Cooke Brown, the creator of Loglan, studied with Broadbeck at Minnesota and was at least thoroughly exposed to this Logical Positivist tradition. So, whether consciously or not, the “logically perfect language” played a role in his choices when he came to create an experimental language.

Another major factor was simplicity. FOPL does away with the many parts of speech and with the variety of tenses, moods and modes, and cases of familiar languages. Among content words there are only two parts of speech, terms and predicates, and, while there are a variety of subtypes (more as logic developed beyond the ’50s) they all behave in the same way. Terms are divided into names, which stand for individuals (however that may be defined), and variables, which play a role in forming compound formulae, together, eventually, with compounded terms. Eventually there came to be terms of various sorts, depending upon what was being counted as an individual, but this did not change the basic grammar. Predicates were divided according to the number of terms they required to make a formula (and, eventually, what types of terms).

A(n atomic) formula, then, was just a predicate with the appropriate number of terms (of the right sorts) in order: Faxb, for example. No cases or sentential roles, no prepositions, no tenses, etc. Beyond this were the recursive steps involving makers: a maker took a specified number of variables and formulae and returned a term or a formula, depending on its type (this gets somewhat more complicated later, but the basic pattern remains the same). Thus, &, a typical formula maker, takes two formulae and returns a new formula, their conjunction: from Faxb and Gxc to (Faxb & Gxc). A, a variable-binding formula maker (quantifier), takes a variable and a formula to give a new formula, the universal generalization of the original formula: so from x and the previous formula we get Ax(Faxb & Gxc). The occurrences of x in this formula are now said to be bound by this quantifier, whereas before they were free. Similarly, @, an illustrative term maker, takes one variable and one formula and produces a new term, in which the variable is now bound: from x and Fx to @xFx, the salient F, say. The formulae used by a maker may be of any degree of complexity, and so may be the terms used in formulae. But the history of their construction, and so their ultimate structure, is always apparent: there is never any doubt about what formulae and variables (and whatever else) are involved at any level. At any stage in the development of, to, and beyond FOPL, the set of makers is closed, and introducing new ones (beyond mere abbreviations) takes a rather dramatic effort, even though the pattern for defining them is clear throughout.
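
By way of a concrete sketch of this construction history, the maker view of FOPL syntax can be mimicked in a few lines of Python. The class names and the show() rendering below are purely illustrative (nothing here is tied to any actual Loglan machinery); the point is only that every expression carries its own derivation, so its structure is never in doubt.

<syntaxhighlight lang="python">
# Illustrative sketch only: a toy encoding of terms, predicates and makers.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Name:                      # a term standing for an individual
    symbol: str
    def show(self): return self.symbol

@dataclass(frozen=True)
class Var:                       # a variable, available to the binding makers
    symbol: str
    def show(self): return self.symbol

@dataclass(frozen=True)
class Atomic:                    # a predicate applied to the right number of terms
    pred: str
    args: Tuple[object, ...]
    def show(self): return self.pred + "".join(a.show() for a in self.args)

@dataclass(frozen=True)
class And:                       # formula maker: two formulae in, one formula out
    left: object
    right: object
    def show(self): return "(" + self.left.show() + " & " + self.right.show() + ")"

@dataclass(frozen=True)
class Forall:                    # variable-binding formula maker (the quantifier A)
    var: Var
    body: object
    def show(self): return "A" + self.var.show() + self.body.show()

@dataclass(frozen=True)
class The:                       # variable-binding term maker (the @ of the text)
    var: Var
    body: object
    def show(self): return "@" + self.var.show() + self.body.show()

# Building Ax(Faxb & Gxc) exactly as the makers dictate:
x = Var("x")
faxb = Atomic("F", (Name("a"), x, Name("b")))
gxc  = Atomic("G", (x, Name("c")))
univ = Forall(x, And(faxb, gxc))
print(univ.show())                          # Ax(Faxb & Gxc)
print(The(x, Atomic("F", (x,))).show())     # @xFx, "the salient F"
</syntaxhighlight>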

Given this, spoken FOPL would seem to be an easy thing to achieve. We need an open class of expressions for names and another for variables and another for predicates, perhaps with some special markers for different types of each sort. Then we get some special expressions for the various makers in use. Then we just rattle the formulae off as they are written, using the recommended expressions for the various symbols. Every logic teacher does this every day to talk about the formulae on the board: “For all ex, both eff ay ex bee and gee ex see”, for the sentence above. Clearly, this ad hoc technique is not quite good enough for our purposes, even leaving aside the fact that we don't have any meaningful expressions here yet. The three classes of expressions are not separated; they are all just letters, and the capital/lower case distinction does not come across in speech. Then there are the parentheses, the left one here pronounced “both”, looking ahead to the connective to follow, and the right one omitted altogether. With a different connective, the left parenthesis would have been differently pronounced, as “if” or “either” or “as”, say, so we need either to deal with them all the same (as “paren”, say) or make the nature of the enclosed compound sentence clearer at the beginning.

(This is the way this problem arises in parenthesized infix – or Principia – notation; in other versions the problem arises in different ways, either by complexities on the connective to show how deeply it is buried in the compound, in labeled infixes, or, in prefix – Polish – notation, by the need to mark the division between component sentences.)

The right parenthesis can, in fact, always be dropped in sentence compounding (though it is often a kindness not to), but needs to be reintroduced (and, indeed, extended) in the case of compounded terms within a simple sentence: is Fa@xGxcb composed of the predicate F and the terms a and @xGxcb, or of that predicate and the terms a, @xGxc and b, or, indeed, of the terms a, @xGx, c, and b with predicate F? We must either enclose the formula in the composition in parentheses, if they are not already there, or else enclose the terms which follow a predicate in some sort of parentheses as well (F, say), and in either case, take care to pronounce both of these parentheses. (There are other, even more tiresome ways to deal with this problem, by always marking the number of places of each predicate, for example). But these can all be done rather cheaply: a few more words for constant characters, like right parentheses of various sorts. (Or, actually, for right parentheses, one sort is enough, if we put all of them in – but would we want to?)
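
The term-boundary problem just described can be seen concretely by writing each of the three candidate parses of Fa@xGxcb as explicit structure and checking that all of them flatten to the same letter string. The tuple encoding below is purely illustrative (it is not any Loglan notation, nor a proposal); it only makes the ambiguity visible.

<syntaxhighlight lang="python">
# Illustrative sketch: three different parses, one spelled-out string.
# A parse is (predicate, list-of-terms); a compound term is ('@', var, (pred, terms)).

def flatten(term):
    if isinstance(term, str):
        return term
    if term[0] == '@':                       # ('@', var, (pred, terms))
        _, var, (pred, args) = term
        return '@' + var + pred + ''.join(flatten(a) for a in args)
    raise ValueError(term)

def spell(parse):
    pred, args = parse
    return pred + ''.join(flatten(a) for a in args)

reading1 = ('F', ['a', ('@', 'x', ('G', ['x', 'c', 'b']))])   # F with terms a and @xGxcb
reading2 = ('F', ['a', ('@', 'x', ('G', ['x', 'c'])), 'b'])   # F with terms a, @xGxc and b
reading3 = ('F', ['a', ('@', 'x', ('G', ['x'])), 'c', 'b'])   # F with terms a, @xGx, c and b

for r in (reading1, reading2, reading3):
    print(spell(r))      # all three print: Fa@xGxcb
</syntaxhighlight>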

The thought of a string of “ends” (say) at the end of every sentence is enough to show that spoken FOPL needs to be different from the written form, where adding a few right parentheses is a minor matter. So you need rules about when you can drop parentheses, when you can’t, and (probably) when you can but shouldn’t, for clarity’s sake. Or find another way around the problem. This is the first stage of the Loglans’ adoption of FOPL.

=== Step A: Atomic sentences ===

Loglan took as its basic sentence type, before any frills, a predicate with a fixed number of places (given in the glossary but not marked in the word anywhere, despite regular suggestions to do so). Predicates had a definite (though increasingly complex as the years went by) phonemic structure, so were distinctive. Names also were distinctive in a variety of ways, while term variables were given by a finite list and rules for extending by subscripts. Composite terms were formed by replacing the first term (which Loglan had moved in front of the predicate – an insignificant change, to aFxb and xGc) by an operator which did the work of a 1-variable, 1-formula term maker and by attaching the arguments of the predicate to the formula by explicit connectives, @G+c, for instance. This solved the first level of possible term misalliance, but for deeper ones, a right-hand end marker for these term makers was used as needed (i.e., when more terms for a higher component followed). The problem about only using the first term of a predicate was solved by a device creating new predicates in which the original first term and another term were swapped, from aFxb to xF′ab, for example, giving rise then to a term @F′+a-b, say. The situation calling for right-hand expression (RHE) markers is then something like H@xF′ b>, which would first appear as H@F′+@G+c-b, where the predicate to which b is attached is unclear, hence H@F′+@G+c]-b, however pronounced.

These and other changes created a new problem, when the predicate of a term might come directly before the predicate of a sentence, creating a potential ambiguity (predicate strings having been made legal – see later). One could make this a case where the RHE parenthesis of the term was used, but, since that could trigger a string of such parentheses, a separate divider was introduced. H@xFx becomes @F/H (or, still legal but riskier, @F]H). Since this automatically closes all the terms that went before, it suggests similar RHEs to close several terms, but not all without stringing out the term closers. This turns out to not help a lot, since learning different words for closing two, three, or N terms is less efficient than just using two or three or N closers. (Having that many open terms at any point is probably bad style, but grammar has to apply to bad style as well as good.)

The complexity of speaking even a relatively simple sentence makes one wonder if there is not some other way to organize terms without loss of crucial information (what term occupies what place with which predicate). The answer so far is “No”. There are ways of reducing the reliance on order and devices for tagging terms according to what predicate they go with, but these all introduce yet more essentially empty and repetitive items, which then present complexities sought to be reduced. A case system, meant both to relieve the requirements for a fixed order and to give some meaning to the various positions – which now have meaning only if you remember the definition of the predicate correctly – does not simplify the need to shift order to make a term, nor does it help enough with the problem of dropped places (upcoming) to be worth the cost. It is not clear that the Loglans’ solution to making this structure speakable is the simplest or shortest or clearest one, but alternate proposals so far have offered no obvious advantages and have often had clear downsides. So let us call this a success: it keeps all the essential information but gets rid of as much superfluous verbiage as possible. That it is often notoriously easy to get wrong, whether by (unstylishly) leaving in unnecessary RHEs or by (disastrously) leaving out needed ones, is a problem for eventual textbooks. And one that gets worse as we get deeper into the language.

In making a language based in this way on FOPL, another inelegance arises. When using FOPL to transcribe arguments from another language, we naturally pick predicates that exactly fit the situation we are dealing with. But, in a Loglan, we have fixed predicates with fixed places. So, to deal with a particular situation, we may not need all places that the predicate supplies (or we may need one not supplied, but that is a later problem). For instance, the predicate briefly rendered “go” is actually a five-place predicate, “1 goes to 2 from 3 along route 4 using mode of travel 5”, so to say just “Sam goes to San Francisco” leaves three places unfilled. Since we don't at the moment care about what goes in there in fact (from here on Southwest by airplane, say), we don't want to say anything more (as we don't in English). The stock logical move in this case would be to bind each of these unused places with a particular quantifier, “some” (and, so, a number of at least implicit parentheses): Sx*Sy*Sz*sGfxyz***.

The Loglans can do that, of course, but that seems to be defeating the purpose of making this as much like other spoken languages as possible while keeping it as rigorous as FOPL. So, the Loglans have three responses, each ultimately going back to the official form. One is to introduce a new predicate based on the original but having only the interesting places (it holds of the mentioned things just in case there are things for the other places so that the original predicate holds for all of them together). The second is to insert a dummy term in the unfilled slots. And finally, and most pleasingly, the slots are just left empty. This last is the standard when the empty slots are all at the end, with no intervening filled slots. The dummies (there are several, for some reason) are used when a filled slot comes after an unfilled slot, though other devices can also be used with just the blanks. So “Sam is going by Southwest”, officially SxSySzsGxywz, might be sG- -w- (“-” for a dummy insert) or sG- -w or sG4w (where 4 is a marker that the next term is, in fact, the fourth one for the predicate) or, modifying the predicate, sG[1,4]w (dropping the other terms) or sG<2>w (rearranging the terms by exchanging the second and fourth), dropping the unused final terms in each case. Of course, restoring the FOPL original requires knowing the places of the original predicate so that the quantifiers can be properly placed, as close to the predicate as possible. These unmentioned quantifiers will come to raise questions in going beyond atomic sentences.
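
Mechanically, restoring the official form from a sentence with dropped places amounts to binding every unmentioned place with a particular (“some”) quantifier over a fresh variable. The following is a minimal illustrative sketch, assuming the five-place “go” predicate above and the prenex notation already in use; the function name and the argument encoding are inventions for this example only.

<syntaxhighlight lang="python">
# Hedged sketch: rebuild the official (prenex) form from the mentioned places.
def restore(pred, arity, given):
    """given maps place number (1-based) to a term, e.g. {1: 's', 4: 'w'}."""
    fresh = iter("xyzuvw")               # fresh variables for the dropped places
    prefix, args = [], []
    for place in range(1, arity + 1):
        if place in given:
            args.append(given[place])
        else:
            v = next(fresh)
            prefix.append("S" + v)       # an implicit particular ("some") quantifier
            args.append(v)
    # first argument fronted, as in the Loglans; quantifiers prenex, as in the official form
    return "".join(prefix) + args[0] + pred + "".join(args[1:])

# "Sam goes to San Francisco": only places 1 and 2 are mentioned.
print(restore("G", 5, {1: "s", 2: "f"}))     # SxSySzsGfxyz
# "Sam is going by Southwest" (sG- -w- or sG4w): places 1 and 4.
print(restore("G", 5, {1: "s", 4: "w"}))     # SxSySzsGxywz
</syntaxhighlight>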

In the opposite direction, predicates may be extended by adding terms. Although the general idea of prepositions (or cases) to give overt meaning to predicate places was rejected, it is retained for situations where a predicate needs to be extended beyond its usual sense. So a new term is introduced by a marker that says what its role is to be. It can be inserted into the physical string of arguments at almost any place, though traditionally it goes at the end. Similarly, the possibility, briefly mentioned above, of a “preposition” to indicate which place of a predicate an argument fills is fully realized in the Loglans, though rarely used. The prefixes to the predicate that exchange the first place and another can be combined to create any order of arguments we want. However, these combinations are often long and not transparent for some rearrangements, so the prepositions are a better choice in speakability terms. Both these prepositional structures behave just like the regular arguments. In particular, they attach to the predicate of a term with the same ties as other arguments. They can even, with some adjustments, be made the term replaced by a term-maker. And they are closed off just like other terms. At this point it is fairly clear, if not rigorously demonstrated, that the Loglans’ reading of atomic sentences does represent the structure completely and accurately.

=== Step B: One-formula sentence makers ===

Sentence makers that require one formula (and perhaps something else) fall mainly into three groups: Negation, Modals, and Quantifiers (which also require a variable and possibly even another subordinate formula). In the standard presentations of FOPL, these go at the beginning of the sentence and their order is significant, since each attaches to the sentence formed by the sentence makers to its right back to the unmarked sentence. But beyond the structural significance of the order, there are clear semantic differences between, say “it was the case that someone [then] was a witch” and “someone [now] was a witch” – that there used to be witches and that there are still former witches around. So moving one past the other is generally not allowed. On the other hand, many modals and quantifiers come in pairs, strong and weak, such that passing negation through them changes one to the other: ~1~ = 2, so ~1 = 2~, and conversely. Of course, in any case, the maker governs the whole sentence that follows, negating it, casting it into the relevant alternate reality, or binding all its free occurrences of the indicated variable.

But in adding these items to a Loglan sentence, we discover something more about what is bound up in the expression “speakable”. We began by giving voice to the expressions of FOPL. Then we pruned the mass of punctuations to just those actually required to keep the structure fully marked. Now something new seems to have been added, which seems at the moment to be, loosely, familiarity. That is, the change here from formula to Loglan neither gives voice to new symbols nor eliminates detritus, but merely puts things in positions familiar from the L1s of likely learners of the language. In light of this, we can look back at the shift of the first argument from after to before the predicate and wonder: Was it just to make for a more efficient term formation, or was it also to bring the sentences into something very like the familiar Subject-Verb-Object order of the L1s of many likely students of the new language?

What happens with Negation and Modals is that they are regularly shifted from the front of the sentence to a place immediately before the predicate. The original position is always possible but is, in fact, rarely used, even with compound sentences. Quantifiers are also shifted inward, from before the sentence (“prenex”), to the place in the body of the sentence where the bound variable first occurs. To be sure, this move is done carefully, in that, for negation and modals, the original order is preserved – though negation tends to move left when the standard dualities allow (but this is stylistic, without logical significance). Similarly, the order of quantifiers of different sorts is preserved, argument places being rearranged to preserve order and further rearrangements forbidden if that order would be disturbed. So, assuming “x loves y” is xLy, “Everybody has someone who loves them” is basically AxSy yLx, which becomes AxL[1,2]y (Loglan has no free variables; every variable is assumed bound particularly, i.e., by “some”, unless otherwise bound explicitly).

The movement of the quantifiers over negation is also treated carefully, though there is some controversy (resolved in different ways every few years) about exactly how that works. The basic positions are that negation, while represented just before the predicate, is to be understood as lying as far left in the sentence as possible, and that negation is to be taken as being where it appears to be. In the first case, quantifiers (and modals) may have to go through the logical place of the negation to get where they belong, and so are transformed in the usual way. In the second view, only makers that came before the original negation need changing. Whichever way is current, the proper original form remains (subject to the position finally assigned to the negation), though it will be different in the two cases. Ax~Fy, on the first view, might be from ~AxSyFxy or Sx~SyFxy or SxAy~Fxy, which are all equivalent. On the second view, it would be from Sx~SyFxy, since Sx, but not Sy, had to pass through the negation to get to its place. (We will later see cases where the negation comes at the end of a sentence, and the matter is slightly more complicated, but still resolvable.) In all these cases, then, the logical form is preserved – up to equivalence, anyhow. And, of course, the prenex version remains available (at slightly extra cost), just as the L1 probably contains the equivalent of “it is not the case that” and “it is possible that” and even “everything is such that”.
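
The “usual way” of transforming quantifiers as they pass a negation is just the textbook duality: a negation moving through “every” turns it into “some”, and conversely. The small sketch below pushes a single leading negation through a prenex string of quantifiers, which is enough to check that the candidate sources of Ax~Fy cited above all come out the same; the list encoding of the prenex is illustrative only.

<syntaxhighlight lang="python">
# Illustrative sketch of quantifier/negation duality in a prenex.
def push_negation(prenex, matrix):
    """prenex: list like [('~',), ('A','x'), ('S','y')]; returns an equivalent
    form with the negation moved all the way onto the matrix."""
    out = []
    negated = False
    for item in prenex:
        if item == ('~',):
            negated = not negated
        elif negated:
            q, v = item
            out.append(('S' if q == 'A' else 'A', v))   # dual quantifier
        else:
            out.append(item)
    body = ('~' if negated else '') + matrix
    return ''.join(q + v for q, v in out) + body

print(push_negation([('~',), ('A', 'x'), ('S', 'y')], 'Fxy'))   # SxAy~Fxy
print(push_negation([('S', 'x'), ('~',), ('S', 'y')], 'Fxy'))   # SxAy~Fxy
print(push_negation([('S', 'x'), ('A', 'y'), ('~',)], 'Fxy'))   # SxAy~Fxy
</syntaxhighlight>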

The case is less clear with some modals that do not come in pairs, like Past and Future. In standard Tense Logic, each of these is one of a pair of the usual sort: “somewhen in a past” and “somewhen in a future”, roughly speaking, but paired with “everywhen in pasts” and “everywhen in futures”. Thus, the usual negation movement is validated. But Loglan tenses are not quite like that. Nor are they like natural language tenses, built on a system of points (present, past, future, and retrofuture) and vectors (before, now, and after). Rather they are (mea culpa in here somewhere) an uneasy compromise between the two: vectors to begin with, but once the vector is traced, its head becomes a point from which a further vector can extend. As a result, PFa sometimes says no more than that Fa was once true, but at other times it says that Fa was true at a particular, but unspecified, past time. And negation reflects this in that ~PFa is sometimes unresolvable and sometimes means P~Fa. (The easy analogy is when we say “There is a man in the house. He ___.”; so here, “There was a time when ___. Then ___.”.) The best solution seems to be not to move tenses relative to negation, but the rules for this are neither so well spelled out nor so carefully followed. There are other modals with similar problems – “probably” and “certainly”, for example, but for different reasons. Still, with care, the original structure is retained, if not quite transparently.

The case of quantifiers, and especially restricted quantifiers, is a more profound change. Not only are the quantifiers moved inward from their prenex position, but they change their grammatical status: quantifiers are no longer a separate sort of thing – a 1-formula sentence maker – but become simply terms. Syntactically, there is little difference between AxFb and @G,Fb, and there is none in the case of the resolution of AG,Fb from [AxGx]Fxb, where the values of the quantified variable are restricted to the non-empty class of Gs (the comma marks a separator between the predicate in the term and the one in the sentence, to prevent them being taken as a compound – on which more later). Quantifiers thus get involved in the place-shifting predicate changes, where care has to be taken to prevent changing the relative order of different quantifiers, although terms generally can move about freely. As with negation and modals, these problems could have been avoided by leaving everything prenex, though arguably this would make sentences of any appreciable length harder to understand – and it is not a common pattern in natural languages (so maybe not something hardwired in our understanding?).

Moving the quantifiers inward to the first place the bound variable occurs also means a loss of direct information about the scope of that quantifier (this is true for modals and negation as well, though the negation tends to get dealt with in various ways, using De Morgan and the like). If the variable bound by a quantifier occurs in some place other than the first one, the connection has to be made. In the case of simple quantifiers, this is done by repeating the bound variable, so AxLxx becomes AxLx. But with the restricted quantifiers, the ordinary anaphoric pronoun resources must be used, the variable having been swallowed: [AxGx]Fxx becomes AG,F[it], for some pronoun [it].

The Loglans have a plethora of pronoun systems, including assignable ones and ones that can be used on the fly, depending on such factors as the initial letter of the predicate in a term, the structural position of the original term in its home sentence, and so on. Despite this, it is not clear that every term can be represented unambiguously by a pronoun in every position, and certainly not clear that this can be done in a way that is easily interpretable by a hearer. Keeping the variables somehow would have eased this problem, which, admittedly, looms larger in theory than in practice. In any case, the scope of a quantifier is now determined to be the shortest sentence which contains the quantifier and all its anaphora (variables or pronouns). Aside from taking some care about what variables to use, this gives a practical solution, even if the sentence represented is only an equivalent to what one started with.

The issue of repetitions arises for regular terms as well, of course, and does not have a variable solution although a variable has been hidden in going from FOPL to Loglan. With regular terms there is the option of simply repeating the term rather than using a pronoun, and this can be used when clarity advises it. Repeating a quantifier tends to make for confusion: is this just a repetition, or is it a new quantifier with the same range? The convention is that “repeated” quantifiers are actually new ones with the same range. So, AG,FAG is [AxGx][AyGy]Fxy, a very different claim from [AxGx]Fxx (“Everybody loves everybody”, versus “Everybody loves himself”).
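
The convention that a “repeated” restricted quantifier is really a new quantifier with the same range amounts to inventing a fresh variable at each occurrence when expanding back to the bracketed prenex form. A small illustrative sketch (the bracket notation follows the text; everything else is assumed):

<syntaxhighlight lang="python">
# Illustrative sketch: each "repeated" quantifier gets its own fresh variable.
def expand_repeats(pred, quant_range, occurrences=2):
    """Expand e.g. AG,FAG into the bracketed prenex form with fresh variables."""
    fresh = iter("xyzuvw")
    prenex, args = [], []
    for _ in range(occurrences):
        v = next(fresh)
        prenex.append("[A" + v + quant_range + v + "]")   # e.g. [AxGx]
        args.append(v)
    return "".join(prenex) + pred + "".join(args)

print(expand_repeats("F", "G"))    # [AxGx][AyGy]Fxy  -- "Everybody loves everybody"
# contrast with the single-quantifier claim [AxGx]Fxx -- "Everybody loves himself"
</syntaxhighlight>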

(By the way, there is a version of FOPL in which term makers are in fact treated as quantifiers. In such a system – or even in the present one with minor changes, if the variables were not suppressed – a large part of the complications of the Loglans’ various pronoun systems could be relieved by using the variables. This added “detritus” would pay other benefits as well, eliminating a major need for place shifting and for the separation between predicates – assuming we could also eliminate the shift from VSO to SVO order. Of course, place shifting has other virtues, like providing easy ways to match familiar concepts with very general predicates, as “destination” is hidden in “go” as the second place, to be shifted to first for independent use. The need for predicate separation markers – and term enders, for that matter – does not seem to have any separate use, and adds a variety of complications.)

=== Step C: Two-formula sentence makers ===

The two-formula sentence makers start with some adequate selection of the “propositional connectives” (the Loglans take AND, OR, IFF and REGARDLESS, though the last needs some extra work like the argument reordering for predicates). Added to these are similar connectives that go outside truth-value logic to causation in various senses and various sorts of modalities: subjunctive conditionals (hypothetical, contrary-to-fact, etc.), as well as alternate logics like strict entailment or analytic entailment or relevant entailment (and relevant or analytic disjunctions as well), and so on through the plethora of logics. But, for the most part, these additions do not make grammatical differences, and so do not need to be discussed separately here, even though the Loglans do accommodate some of them. (There is a similar plethora of logics for one-formula sentence makers, and the Loglans have some of them as well, but again, they are grammatically of a piece with the standard items.)

Historically there are two ways that these sentence makers (conjunctions) are represented. The dominant form is infix – or Principia – notation, where the mark of the conjunction goes between the two formulae and a pair of parentheses encloses the whole. The alternate form is prefix – or Polish – notation, where the mark goes before the pair and no further parentheses are needed. (There is, admittedly, a third possibility, postfix or reverse Polish notation, where the mark comes after the pair. This was used on some calculators back in the day, but never had much play in logic).

From the point of view of an attempt to eliminate detritus, prefix is obviously the most desirable version. But as a feature in a spoken language, it seemed to put a strain on memory and analysis. It seems to be harder to grasp CCpKqrKCpqCpr than even the fully parenthesized ((p→(q&r))→((p→q)&(p→r))). And, in FOPL as used, numerous abbreviations were possible, dropping parentheses under a variety of rules, including various additions to the markers to show relative depth and the like. Prefix notation does not offer much in the way of abbreviations, except marking when a string of the same connective occurs, and this rather obscures structure than reveals it: C3pqpp is even more opaque than CCCpqpp.
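
That prefix notation needs no parentheses at all is easy to verify mechanically: the arity of each connective letter fixes the parse. The following is a minimal recursive-descent sketch (the connective table and the fully parenthesized infix rendering are illustrative, not any Loglan's machinery), applied to the formula quoted above.

<syntaxhighlight lang="python">
# Illustrative sketch: reading Polish (prefix) notation without parentheses.
BINARY = {'C': '→', 'K': '&', 'A': 'v', 'E': '↔'}
UNARY  = {'N': '~'}

def parse(s, i=0):
    """Return (infix_string, next_index) for the prefix formula starting at i."""
    head = s[i]
    if head in BINARY:
        left, i = parse(s, i + 1)
        right, i = parse(s, i)
        return "(" + left + BINARY[head] + right + ")", i
    if head in UNARY:
        sub, i = parse(s, i + 1)
        return UNARY[head] + sub, i
    return head, i + 1           # an atomic sentence letter

formula = "CCpKqrKCpqCpr"
infix, end = parse(formula)
assert end == len(formula)       # the whole string was consumed, in exactly one way
print(infix)                     # ((p→(q&r))→((p→q)&(p→r)))
</syntaxhighlight>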

The Loglans use both forms and, indeed, mix them in a single sentence. Obviously, this requires some care and, especially, devices for showing boundaries of component sentences: Kpq&r is just ambiguous as it stands, requiring parentheses somewhere or a convention that tells where they go: (Kpq & r) or Kp(q&r). But such explicit parentheses or conventions or other devices are needed already for the infix forms, in any case. As noted earlier, right parentheses are generally detritus – except in various situations where they are not. Left parentheses are needed more often, but they, too, can be dropped in many cases (and always the outermost ones if they begin the sentence). The rest of the infix cases depend upon conventions involving order of grouping (left grouping of similar conjunctions does not need parentheses – this and the following are not necessarily the Loglanic conventions, but familiar types) or type of conjunction (AND and OR don't need parentheses as components of IF). The Loglans also have depth markers, so that a conjunction marked n+1 is of a component of a sentence with a conjunction marked n. And there are conventions about whether the prefix or the infix marker dominates in a mixed sentence.

There is one more marker that is needed in the Loglans. In prefix notation in FOPL, the boundary between the two connected sentences does not need to be marked, since the new sentence always begins in a distinctive way: a new conjunction or a one-formula formula maker or a predicate, any of which close off the previous sentence, which was down to a string of terms, into which these new markers do not fit. But in the Loglans, a new sentence can begin with a term or a quantifier, which now counts as a term, and so can appear to continue the string of terms of the previous sentence. One could, of course, require closing out all the terms and the previous sentences to start afresh, but it is clearly more efficient to have, as in the case of the separation between subject term and predicate, a single marker to accomplish this necessity. As a plus, the separator can carry negations, which means that the initial conjunction can be simple and yet all of the logical relations be expressed.

With all these devices, it seems likely that any formula of FOPL can get a reasonably efficient, unambiguous Loglanic formulation – though, short of a fully parenthesized one, I am not sure this has ever been proven (or questioned, even). What is less certain is whether a given formulation is in fact unambiguous and, even if it is, whether it is an unambiguous representation of the formula intended. As will be discussed later, the test for anamphiboly is not directly tied to the structure of FOPL and the presumed indirect connections have not been tested (or, for the most part, stated). For now, however, the general expectation is enough to continue the claim that the Loglans are spoken FOPL.

But conjunctions introduce several new kinds of repetitive redundancies. And removing this detritus introduces new kinds of expressions into the Loglans, which, in turn, suggest new kinds of expressions in FOPL, expressions which may have been there but were not discussed earlier. Some of these cases are just matters of convenience (more efficient usage, a branch of speakability); others are genuine new notions. Similarly, some merely expand on already given categories, others change the boundaries of familiar structures.

To take a simple case, “Sam is tall and Sam drinks beer” (symbolically (Ts & Bs)). Do we really – in a human language – have to (or want to) repeat the “Sam”? Just about every L1 experience says not. The Loglans could, of course, use a pronoun here, but that is hardly a savings. So we want to collapse the two sentences into the single subject and a complex predicate. Now, in the logical toolkit there is a device for doing just this, using a predicate-making operator on a formula and a variable. This would result in \x(Tx & Bx) for the predicate, and the desired sentence would be \x(Tx & Bx)s: not an improvement. But we have some experience which suggests immediately that we 1) move the subject to the front and replace the operator; 2) assume the bound variable inside is the subject and so drop it as covered in front; and 3) drop the superfluous right parenthesis. This gives s(T&B, or even sKT,B. We do need the left marker still, since B might be a sentence in its own right under some circumstances. It also turns out that, if the & here is a different word, peculiar to joining predicates, the left parenthesis is not needed (except in more complex cases), so we can get down to sT+B. Curiously, this sort of change is not needed with K, since what follows the K up to the separator shows what sort of expression is involved. This factor will recur in what follows.
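
The collapse and its reversal can be kept straight with a pair of toy string transformations that line up the three forms – (Ts & Bs), \x(Tx & Bx)s and sT+B. The shorthand below is illustrative only, not any Loglan's actual orthography.

<syntaxhighlight lang="python">
# Illustrative sketch of the subject-sharing collapse and its expansion.
def collapse(subject, pred1, pred2):
    """(P1 s & P2 s)  ->  the subject-fronted compound predicate, e.g. sT+B."""
    return subject + pred1 + "+" + pred2

def expand(compact):
    """sT+B  ->  the lambda form and the plain conjunction it abbreviates."""
    subject, rest = compact[0], compact[1:]
    p1, p2 = rest.split("+")
    lam  = "\\x(" + p1 + "x & " + p2 + "x)" + subject
    conj = "(" + p1 + subject + " & " + p2 + subject + ")"
    return lam, conj

print(collapse("s", "T", "B"))     # sT+B
lam, conj = expand("sT+B")
print(lam)                         # \x(Tx & Bx)s
print(conj)                        # (Ts & Bs)
</syntaxhighlight>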

We can complicate this example slightly: “Sam is tall and Sam is going to San Francisco”: (Ts &Gsf). The first step in the collapse is \x(Tx & Gxf)s. But now, we need to proceed with some care, since the simple sT+Gf is unclear: f might be an argument to both predicates, especially if T is (as is usual in the Loglans) a predicate of more than one place with some later ones just not mentioned. There are two simple possibilities: either mark the end of the compound predicate to show that the following term goes with both, or mark the term as being connected with just the last predicate (similar to the connection within terms). The general dislike of RHE markers favors the second approach, sT+G-f, but, in fact, as cases become more complicated, with some terms going with only one predicate and some with both (and with more predicates involved), both systems have to be used, so sT+Gf is also correct for this case (the final parenthesis, after the f, not being needed).

All of this amounts to a change like that seen earlier with quantifiers: a formula maker has become a more inner grammatical type, a predicate maker in this case. At least, unlike the case of quantifiers, the relative scope of the collapsed sentence is not a problem, always being a component of whatever larger sentence it lies immediately within. When the collapse is extended, with the abstracted sentence itself more than one level deep, there may be internal problems of relative depth, but there are surely enough mechanisms in place for the fully sentential forms that fairly straightforward modifications can be made for these cases.

This pattern calls attention to another. A logician confronted with “This is a tiny galaxy” would likely transcribe it as “This is tiny and this is a galaxy”, KTt,Gt, which a Loglanist would immediately want to turn back into tKT,G. But that Loglanist would also recognize that this is just not right; even the tiniest galaxy is not tiny (or even small).

So, how do we deal with these? Logic has a series of suggestions. The first is simply to say that “tiny galaxy” is a separate predicate, related to smallness and galaxies, if at all, only semantically and not formally. So a tiny-galaxy is indeed a galaxy and smaller than most other galaxies, but this is all additional information in the dictionary, not available grammatically, as it appears to be in the English. That is, the correct transcription is tW. This seems pretty unsatisfactory, even aside from the necessity of constantly creating new predicates which are related to existing ones in similar ways.

The second approach (and Loglan proper did this at one time) is to say that a number of adjectives (call them) are in fact two-place, with the second place for some reference class; so “tiny” is actually “tiny for a ___”, with the argument “a galaxy” or “galaxies” or some such added somehow (and just how is open to several suggestions) but presumably as a term (*G in the Loglan, say). So, we end up with tKGT-*G. This is clearly better, but the repeated G looks like redundancy. To be sure, we do occasionally want to use predicates of this sort non-redundantly: “He is tiny – for a walrus”, say (meanly), hKHT-*W. But, when the reference class is given directly, this seems unnecessary (and so to be eliminated for speakability purposes).

So, the third approach is to produce a predicate maker which, in this case, asserts one predicate of the arguments, relativizes the other to that first one, and then asserts the whole of the arguments again. While this case is typical, fine analyses have found other cases where two or more predicates interact to create something new, though related in regular ways to the underlying basic predicates (adverbs, for example, like “very” or “rapidly”). While the Loglans have developed experimentally a number of markers for different sorts of such situations, the general approach has been to use simple concatenation (as in English), so back to tTG (the reference class comes last). Since both predicates may well have other relevant arguments than t and may be complex in the way discussed in the previous paragraph, some markers of grouping and subordination may be needed; but there seem to be enough of those, either in the forms used for sentential cases or in slightly modified versions, to guarantee that an unambiguous expression can be found for these cases. In addition, one of the concatenated expressions might itself be a concatenation, not a buried sentential conjunction. Sorting out the half-dozen or so readings of “pretty little girls’ school” (tested later on such things as “pretty little girls’ school teachers union regulations compliance monitors”) led to another system of prefix and infix and closure markers – parallel to those for collapsed sentential connectives – and some devices for resolving indeterminate scopes.
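A small illustration may help here; the encoding is again my own, assuming only the reading just described, on which “tiny galaxy” asserts galaxy-hood outright and tininess only relative to galaxies:

<pre>
# Toy sketch of the third approach: the modification tTG is read as
# "t is a G, and t is T for a G", not as "t is T and t is a G".

def modify(modifier, head):
    """Build a one-place predicate meaning 'is a <head> and <modifier> for a <head>'."""
    def pred(x, facts):
        # facts maps (predicate, args...) tuples to truth values; the modifier
        # takes the head predicate as its reference class.
        return facts.get((head, x), False) and facts.get((modifier, x, head), False)
    return pred

facts = {
    ("galaxy", "m33"): True,
    ("tiny", "m33", "galaxy"): True,   # tiny *for a galaxy*
    ("tiny", "m33"): False,            # not tiny outright
}
tiny_galaxy = modify("tiny", "galaxy")
print(tiny_galaxy("m33", facts))       # True: a tiny galaxy, though not a tiny thing
</pre>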

The opposite situation also often occurs: same predicate but different arguments – “Sam is going to San Francisco and Bob is going to San Francisco”. Again, an anaphoric solution is possible, but offers no advantages over the original. So, as expected, the Loglans create a compound term here – not corresponding to anything at all common in FOPL and its kin. So, we get something like (s&b)Gf or, again with less detritus, Ks,bGf; the occurrence of only a term between conjunction and separator shows that this is a term maker. The infix system needs a different form of the conjunction again (neither sentential nor predicate), s^b,Gf, more or less. Once you start on this course, of course, it is hard to stop. So “Sam is going to San Francisco and Bob is going to Los Angeles” is Ksf,blG (non-first arguments could always move in front of the predicate for rhetorical reasons, and so this poses no new issues) or sf^blG, with parentheses as needed in each case. These moves can be iterated to, say, Ksf.bDlvG: “Sam is going to San Francisco and Bob to either Los Angeles or Las Vegas.” The subordination of the components, though moved from the sentential to the nominal level, remains clear. But, in a case like KsbGDfl, “Sam and Bob are going to San Francisco or Los Angeles”, some doubt remains: are both of them going to one of the places, or is each of them going to one, perhaps a different one? Going back to the sentential level: DKsGf,bGf,KsGl,bGl or KDsGf,sGl,KDbGf,bGl.
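The two readings in question can be spelled out mechanically; the little sketch below (my own toy rendering, nothing official) simply expands the conjoined terms back to the sentential level in the two orders:

<pre>
# Toy expansion of KsbGDfl back at the sentential level, giving the two readings
# mentioned above: DKsGf,bGf,KsGl,bGl versus KDsGf,sGl,KDbGf,bGl.

def same_place(subjects, destinations):
    """Reading 1 (D over K): everyone goes to one and the same destination."""
    return " or ".join(
        "(" + " and ".join(f"{s} goes to {d}" for s in subjects) + ")"
        for d in destinations
    )

def each_some_place(subjects, destinations):
    """Reading 2 (K over D): each goes to some destination, not necessarily the same one."""
    return " and ".join(
        "(" + " or ".join(f"{s} goes to {d}" for d in destinations) + ")"
        for s in subjects
    )

subs, dests = ["Sam", "Bob"], ["San Francisco", "Los Angeles"]
print(same_place(subs, dests))
print(each_some_place(subs, dests))
</pre>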

The usual possibilities are available: we might reorder the terms so that the topmost conjunction comes first and so on, or we might mark each conjunction for relative depth. This whole approach can even be extended to cases which are not exactly parallel: “John is going through Chicago or by auto” – jGD4c5a. As noted earlier, the prefix notation is generally simpler here, since the same form can be used for sentences and most collapses (and markers added for nonsentential conjunction); the infix forms require new forms (typically related) for each sort of case: terms, predicates, term strings, and even subtypes within these.

When we say that Sam and Bob are going to San Francisco, there is no obvious suggestion that they are going together (whatever that means: on the same plane, in adjoining seats, for the same meeting, etc.), just that one is and the other one is, too. But sometimes it is significant that they are going together, and that should be marked. The straightforward way of doing this, a term-maker (of an extendable number of terms, since the group need not be just two), raises some problems. In standard FOPL, terms refer to individuals, though that is not very precisely defined. These new terms clearly refer to sets or, at least, to more than one individual simultaneously (a little excursion into logic gets these two to amount to the same thing eventually). The fact that the collapsed sentential forms above also seemed to do so can be dismissed as being merely an appearance, not the ultimate situation. To be sure, the present new situation can, with some degree of plausibility, be reduced to the sentential case by a variety of devices: as a collapse of “Sam is going to San Francisco and Bob is going with him”, the latter predicate probably concatenating with the former; or, more simply, as a preposition, “with Bob”, attached to the main predicate, and its argument raised somehow (but quite regularly). Neither of these feels quite right, and so the term maker is used, iterated for more than two involved terms. These terms can, obviously, interact with other types, from above, so markers for relative scope are needed throughout.

There turns out to be a similar situation with predicates as with terms: one thing with two or more different components. So, along with blue and black balls that are some blue and some black, there are blue and black balls that are each partially blue and partially black. This seems, possibly because it uses “and” in English, to be a special case of combining predicates, different from the modifying sort and the sentential collapse; and so it also receives its own markers (related to those for set building above, perhaps), and, of course, devices for marking relative scope.

And scope is the last issue to deal with; the scope of those prenex 1-formula markers that were moved inward early on. The Loglans tend to be very careful with negation, keeping it clearly over compound sentences by attaching it to connectives and making appropriate changes in quantifiers and modals in the move. The situation with quantifiers and modals is less clear. A prenex quantifier tends to be moved to the first occurrence of its variable, which may be deep in some compound sentence. Though there is a rule about heeding changes brought about by negations, passage through a negation scope is not always obvious. It may be more obscure if the quantifier is caught in a collapse and is buried in a term, not even a sentence. So, while the general story is that the scope of a quantifier is the shortest complete sentence that contains all the occurrences of its variable, it may not be easy to see what that is. And reconstructing the sentence may only work up to equivalence, not the real original (not that that is a bad thing).
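As a rough illustration of that rule of thumb – again a toy encoding of my own, with compound sentences as tagged tuples – the scope of a variable’s quantifier can be found by walking down to the smallest subformula that still contains every occurrence of the variable:

<pre>
# Toy sketch of the stated rule: the scope of a quantifier is the shortest complete
# sentence containing all occurrences of its variable.

CONNECTIVES = ("and", "or", "not")

def occurs(formula, var):
    """Does `var` occur as a term anywhere in `formula`?"""
    op, *rest = formula
    if op in CONNECTIVES:
        return any(occurs(f, var) for f in rest)
    return var in rest                      # atomic: predicate followed by its terms

def smallest_scope(formula, var):
    """Smallest subformula containing every occurrence of `var` (None if it never occurs)."""
    op, *rest = formula
    if op in CONNECTIVES:
        containing = [f for f in rest if occurs(f, var)]
        if len(containing) == 1:
            return smallest_scope(containing[0], var)   # all occurrences in one branch
        return formula if containing else None          # split across branches: stop here
    return formula if var in rest else None

# x occurs in both conjuncts, so the whole conjunction is its scope:
print(smallest_scope(("and", ("G", "x", "f"), ("or", ("T", "x", "g"), ("B", "x"))), "x"))
</pre>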

For the restricted quantifiers, which have no variable to keep track of, the limit is the last pronoun that picks up that quantifier expression – and there may be several such in the course of a complex sentence. For the modals, there seem not to be strict rules but rather loose habits: a tense-marked predicate refers to events at that time; subsequent ones (unmarked) refer to that same time or ones later as the events flow naturally. Subsequent marked ones place their event according to the mark, relative to where the time was when the predicate came along. Except, of course, there are markers for radical shifts – to now, for example, or to some specified event. The case for non-tense modals is even less clear. One tendency is to take each as referring to the smallest possible sentence; the other is to take them as lasting until a countering modal comes along (an “in fact” to the ongoing “supposing”, say).
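For the tense habit just described, a very small sketch (my own reading of that loose habit, not a codified rule of any Loglan) can show the bookkeeping: marked predicates shift a running reference time, and unmarked ones simply stay at it:

<pre>
# Toy sketch of the tense habit: a marked predicate moves the reference time
# relative to where it currently is; unmarked predicates reuse the current time.

def place_events(clauses, start=0):
    """clauses: list of (mark, predicate); mark is an offset such as -1, +1, or None."""
    time, placed = start, []
    for mark, predicate in clauses:
        if mark is not None:
            time += mark                    # shift relative to the running reference time
        placed.append((predicate, time))
    return placed

# "He arrived [past], ate, and will leave [future]" relative to now = 0:
print(place_events([(-1, "arrive"), (None, "eat"), (+1, "leave")]))
# -> [('arrive', -1), ('eat', -1), ('leave', 0)]
</pre>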

Summary

In summary, the Loglans can be said to be spoken FOPL (or its current equivalent) in the sense that every sentence of such a language can be viewed as derived from a formula of FOPL by a series of transformations, which preserve meaning and structure while reducing repetition and irrelevant items. I have sketched the major types of such moves above, skipping details, which are sometimes very intricate and which have also changed over the history of the Loglans and differ among the separate languages. The crucial point is that these transformations are all reversible, that the original formula can, in principle, be recovered. A related feature is that the basic structure of that underlying formula is close to the surface, easy to see, since the transformations do not run deep.

Interestingly, the books about the Loglans (Loglan 1 and The Complete Lojban Language, preeminently) say little about all of this, but are focused more upon the relations of the language described to familiar languages (English first, of course). One would not really learn the grammar of FOPL from any of these books, and so lines like “this structure in FOPL gets transformed to this structure in Loglan” do not play much of a role, either as instruction or explanation. We do learn that basic sentences consist of a predicate and a string of terms in order, without any special marking for the roles of the terms, and that changing the order of some items is not to be done unless caution is used (with some English cases of what lack of caution could do). We learn that compound sentences come with a choice of representations, which will carry over to sentences of similar meaning which have compound predicates or compound terms. And we learn that certain sorts of delimiters can be dropped and others not in various situations, although this is based on problems about what comes next in a string of words, not about the end of structure as such. So, since the original transformation is not much discussed, the reversal plays no role; it is enough that the sentence is grammatical in this language, without considering whether it really represents FOPL.

Originally, this is not surprising, since the scientific foundations for this sort of description only appeared at the same time as Loglan began (1955) and the Loglans lost their contact with academic linguistics (they never had much with field linguistics) in the early 1960s, when these theories began to make some way. On the other hand, the epigones of the Loglans were largely computer scientists, and so theories of computer languages, which are more static – not to say linear – dominate most theoretical discussions of the grammar of the Loglans. This theory has been directed mainly at producing parsers to derive a grammatical description linearly (YACC and PEG seem to be the current models).

But surprisingly, had the Loglans kept in contact with linguistics outside the computer field, in anthropology and philosophy and just pure linguistics, they would have found that this sort of description was at the forefront of the field. According to not a few schools of linguistics, every sentence of every language is derived from a formula of some worthy successor of FOPL, by some appropriate form of the moves outlined above. The theoretical base is not, of course, strictly FOPL++, but an abstraction with essentially the same structure. And the moves will be different for each language, but basically of the same sort: shifting linear order, collapsing commonalities, eliminating detritus, and so on. The major difference for natural languages, aside from generally a much larger set of rules, obligatory and optional, is that they are not required to be reversible. That is, a single linear string of words can be derived equally correctly from very different formulae. So, again I come to the point that the Loglans’ interest lies entirely in monoparsing.

== Maxim Two: Loglan was designed to test the Sapir-Whorf Hypothesis ==

This would be the sexy metaphysical Sapir-Whorf Hypothesis (SWH) of the 1920s into the ’60s. Although it was never formulated very precisely, the general idea was that the structure of the language you spoke conditioned the way you viewed the world, giving you a naïve metaphysics which pervaded your thoughts and culture. Over the years there were a number of more detailed positions about how strongly to take “condition”, from “nudging you in a direction” to “totally determining your world view”. The strongest position was hard to hold in view of the numerous expositions of metaphysics of incompatible sorts in languages of a certain type (Process Philosophy in plug-and-socket English, for example, or the fact that both Plato and Aristotle wrote Greek). The weakest claims hardly came up to the level of a hypothesis rather than a casual observation, since nothing really counted as a counterexample. But somewhere in the middle there seemed to be a significant thesis.

The roots of this discussion lay in the change around the beginning of the 20th century, from “civilizing” (deculturating) or killing tribal people to learning how they lived and viewed the world (empirical anthropology). And with that came studying the tribal languages in their own terms, rather than merely finding out how they expressed various things from Latin (or Hebrew or, for a really scientific approach, English) grammar. And, as these studies piled up, it became clear that people spoke languages radically different from one another and especially from English (and the rest of the Indo-European languages of Europe). And it was equally clear that how they described the components and structure of the world was very different from the familiar categories of naïve Euro-Americans, and, indeed, from the theories of not-so-naïve philosophers.

The familiar languages, which came to be called Standard Average European (SAE), were plug-and-socket affairs of nouns, which filled holes in adjectives to make bigger nouny things, and verbs, holey things which eventually had their holes filled by the nouny things to make sentences. Now there were languages which seemed to have no nouns at all, only verbs, say. Even people’s names were verbs. And then there were languages that had only nouns (or maybe they were adjectives) and no verbs. And words that could not be described in familiar European grammatical categories.

These strangenesses extended to vocabulary also. Beyond the apocryphal tales of the twenty-seven Eskimo words for snow, there were facts such as that some languages had no color words except “black” and “white”, or that they used the same word for blue and green (or different ones for dark blue and light blue). These were less surprising, since there were occasional differences of this sort among the languages of Europe (or even within some one of them). But they tested out as genuinely affecting how people perceived the world. (Told to put all the blocks of the same color together, Navajo children regularly put the blues and the greens in the same pile, say.) And there was other evidence that what you called a thing affected how you behaved in relation to it (Whorf on empty oil drums, for example, or, more significantly, word choice in propaganda). But the most interesting such differences came in the details of the language, the essential categories, like (loosely speaking from an SAE perspective) tense and case. Many languages did not have tense at all, even when they had verbs, and what they had instead (i.e., to deal with time relations) were elaborations on aspects and the like, from the richest of Indo-European grammars and far beyond. Similarly, what happened to nouns, when there were some, bore little relation to familiar cases, even to the complex constructions on Finnish nouns. They even overlapped with tenses in some cases. And these differences seemed to have metaphysical significance, since they spoke to how the world of space and time (or whatever, it must be said at this point) was organized.

And now that the anthropologist-linguists could interview their subjects directly, rather than through an interpreter (or string of interpreters), they could get direct information about how they viewed the world. And what they found turned out to be a range of different metaphysics, of views about what is in the world and how it is put together.

Although the details differed for each group, the views came to be grouped together into a few broad categories. There was, of course, the “natural” view of individual, independent things which took on properties and engaged in activities, but remained essentially the same throughout. Time and space were linear and were the framework within which things operated. By contrast, there was the world as a giant activity (maybe a process), involving countless subactivities and subprocesses which flowed into one another, or passed away or started up, with little vortices which were now part of one process, now of another, and were counted as one only because of spatio-temporal continuity. Space and time were relative to particular processes and often circular as a result. Then there were the views that held that what there really were were enormous entities, variously spelled out as masses and universals, and events were simply the collocation of chunks (or projections) of these archetypes, which were the primary individuals. Time and space were derivative notions, if they played a role at all. (There were actually several other language classes and metaphysics discovered, but these three were the most discussed and developed and they show the essentials of process.)

Comparing their language data and their metaphysical data, anthropologists discovered some interesting connections. It seemed that speakers of SAE languages (even if spoken far from Europe) were inclined to view the world as independent things entering into activities and so on, and to speak languages with tenses, and take time and space as frameworks. And conversely. Similarly, process metaphysics and a relativist view of time went with languages which were virtually all verbs – most of which had aspects. And archetype metaphysics went with all-noun languages. Correlation is not causation, of course, and here it might go either way, so for several decades there was a search for a test to find whether there was causation (preferably from language to metaphysics).

So, in 1955, James Cooke Brown, a newly minted social psychologist and assistant professor at the University of Florida, hit upon the idea of constructing a language, Loglan, that was not like any other – certainly not like that of the students who would be his subjects – and running some experiments with it. He would test subjects in a range of psychological and cultural traits, teach them the language thoroughly, then test them again to see what changes (if any) appeared (teaching other students some familiar language as a control group). But constructing the language turned out to be more complicated than planned, as new ideas kept arising to be incorporated – and old ones needed to be discarded. So the experiment was never performed. But the idea of the experiment – and the language that was to embody it – gained some public notice (Scientific American, June 1960), and people asked about it. Brown had by then invented Careers, a popular board game, and left academia, but from time to time he encouraged those interested in Loglan, getting some grants for developing the language and self-publishing various books about it, giving enough details for people to manage intelligible utterances in it. In 1975, he started a major effort, publishing the most thorough books so far and founding an organization to promote the language (with many goals beyond that of a hypothesis test), including a journal for discussion of and in the language.

In the classic politics of international auxiliary languages (which Loglan always officially denied it intended to be, but…) Loglan spawned Lojban, a virtual clone (remembering that clones differ markedly in outward appearance), which, after an unpleasant lawsuit, proceeds on its independent way, diverging ever more from the original, as it too has developed. Neither language still says much about SWH, but each pursues other sorts of goals. The test of SWH, for which Loglan was started, has never been performed or seriously attempted.

And this is just as well, since Loglan is totally misdesigned for that purpose. Loglan is based on First-Order Predicate Logic (FOPL) and, though it has come to not look much like it, it retains that basic structure. But FOPL is the product of over 2,000 years of European development, put into final form around the beginning of the 20th century by English and German logicians (with significant help from French and Italian and eventually Polish ones); its entire history is in SAE languages. Not surprisingly, then, it is a paradigm case of an SAE language, terms plugging the holes in predicates to make sentences. As a result, teaching it to English speakers (the likely test subjects, but any Euro-Americans would do as well) would be merely exposing them to another language of the same type, presumably merely reinforcing their existing metaphysics rather than introducing a new one. I suppose one might try to find a group of speakers of, say, a process language and teach them a Loglan. But the process of devising appropriate tests for the new language and culture is prohibitive.

And futile. SWH in the metaphysical form dropped out of academic interest shortly after Loglan started up. Its underpinnings were made questionable (at least) by developments in the 1950s and ’60s in linguistics and the other social sciences. On the one hand, the differences between languages were found to be very superficial, with a basic common core across all languages. On the other hand, the way that people viewed the world and their place in it turned out, on more thorough examination, to be pretty much the same at the basic level. The great metaphysical differences proved to be merely a linguistic construct, made of inadequate analysis and incomplete observation.

In particular, in one major division in theoretical linguistics, sentences were seen as built up from particles very like terms and predicates into basic units, which were then combined and transformed through a series of processes, resulting eventually in an utterance. The stages at which an utterance came to take on the peculiar surface structure of a given language were very late in the process; in some versions, even just the last step before phonetic realization. While these theories are not universally accepted (or even respected), their analytic and explanatory power makes them a major force in the field. Even their opponents, those who point out, for example, that the process is too complex to allow for creating individual sentences on the fly in real time, or that it cannot account for changing sentences in mid-utterance, or that finding the same structure at the root in all languages looks suspiciously like an artifact of the procedures of analysis, still make use of some of the results. To be sure, some branches of this general pattern, like the claim that the basic structure just is FOPL – or, rather, an updated intensional version – are less widely held (or understood or developed) but are especially interesting to the Loglans, since they place their creation in the mainstream of linguistic research.

On the other side of the issue, the 1950s and ’60s saw a new drive to put more science in the social sciences (well, the linguistic developments were part of that, too). In particular, there was a growing interest in creating objective tests for characteristics that the various social sciences were interested in. A report on what a subject actually did in certain situations was generally considered more significant than what the subject said it was doing. Indeed, language-mediated data generally required some care in use, both from the subject and from the interpretations of the observers. So it was seen that people with different languages behaved very similarly in a variety of situations which were created (it was thought) to test the subject’s view of itself and of the world around it. The result seemed to be that people everywhere behaved as though they were separate entities, not vortices in a stream nor chunks of a greater whole, and that they interacted with other things which were also independent, separate objects. While all manner of challenges have been raised to the interpretation of these results and not all have been met successfully, the basic likeness of the non-verbal responses to situations remains, whatever its explanation. So, the final word (you wish!) on SWH is just that, when speaking about their world view, speakers spoke languages which their examiners took literally: process-language speakers were viewed as having a process view of the world because they reported that view in a process language. But non-verbally they did nothing different that fit with the supposed view.

SWH had two other versions which persisted after the metaphysical one disappeared. One is the New Age version that grows out of the metaphysical. In the 1950s to ’70s (at least), when people were seeking some sort of mental/spiritual experience of a different world view, the suggestion (little understood in detail) that coming to speak a radically different language would produce this effect led many people (well, dozens) to learn the language of their particular path: Sanskrit, Chinese and Japanese, mainly, with no particular effect that could be traced to the language. Others, wanting to get away from all linguistic/cultural conditioning, sought to transcend language by meditating on sounds or meaningless phrases or expressing themselves in glossolalia, again with effects that did not seem to be particularly related to the unlanguage involved. But the idea moved to the science-fiction and hence conlang world, where it thrives. Starting a little early (1948) with Orwell's Newspeak, which makes its speakers unquestioning servants of the grammarian state, there have been languages constructed – or at least described – to manage all manner of useful traits: intelligence, happiness, spirituality, attractiveness, and so on. Aside from some doubts about how well these languages are designed for their intended purposes (one popular one aimed at promoting a positive attitude is overloaded, more than two to one, with negative terms), the results have not been confirmatory of the general plan.

The other SWH that survives is the vocabulary version, which was dismissed as uninteresting and trivial in the early days. This version actually received some support from the more objective tests that harmed the metaphysical version. To be sure, it was not all success: where the old test, telling Navajo and Anglo children to put blocks of the same color together, led to the Navajo putting blue and green blocks in the same pile, the new test, which omitted reference to color (but forced that as the deciding factor), resulted in all the children creating virtually identical piles. But at the micro level, those same Navajo children were slower to identify colors as being like sample one or sample two when both samples were in the turquoise range of the Navajo word. The differences were microscopic, but enough to show that some features – i.e., vocabulary – of a language do affect the way we see the world. The result most often touted as demonstrating SWH is the fact that Russian speakers, who have two words for blue, one for lighter and one for darker shades, are 0.17 seconds faster at identifying a sample flashed on a screen as being light or dark. I note this triumph without comment.