PREFACE
Students often ask why Latin has so many inflectional endings and why Latin word order is so variable. Our Semantics for Latin (OUP, 2013) was devoted to the first question, offering interested readers a morphology-semantics interface, that is an account of the meanings of the inflectional endings in terms of a modern denotational semantics. The present companion volume tackles the second question, that is the syntax-pragmatics interface. In addition to providing a descriptive analysis of Latin information structure based on detailed philological evidence, it shows readers how to go beyond just an intuitive understanding of pragmatic meaning and formalize the informational content of the various word orders in terms of explicit compositional semantic derivations. Every Latin sentence has some word order, and in general each word order encodes a particular information structure: in this sense each word order has a different meaning. Using a slightly adjusted version of the structured meanings theory, the book shows how the pragmatic meanings matching each different word order arise naturally and spontaneously out of the compositional process as an integral part of a single semantic derivation covering denotational and informational meaning at one and the same time. Apart from its intrinsic interest, we hope that the material in this book will be of real practical value to students and teachers of Latin and, more generally, to scholars engaged in any discussion of Latin textual meaning.
We have again adopted the usual textbook style and refrained from giving point-by-point bibliographical references. The bibliography is limited to some suggestions for further reading. So it is important for us to take this opportunity to issue a general acknowledgement of our systematic indebtedness to the already quite extensive body of published and unpublished work on information structure and the syntax-pragmatics interface, the existence of which made it both possible and desirable for this book to be written. An earlier version of some material from the beginning of the book appeared in Generative Approaches to Latin Syntax, edited by Jaume Mateu and Renato Oniga (Catalan Journal of Linguistics 16). We would like to thank Susan F. Stephens for her kind help with the proofs and Helen Devine for a great deal of very smart and prompt IT service.
A.M.D., L.D.S.
Syntax is there for a purpose: to guide the construction of semantic representations.
(M. Krifka 2014)
CONTENTS
Abbreviations
INTRODUCTION
A note to the student · Compositional semantics · Semantics for free word order · Semantics for information structure
1 | BROAD FOCUS
Semantics for neutral word order · Adjuncts · Directionals · Nonreferential objects · Existential-presentational structure · Verum focus · Discourse cohesion operators
2 | NARROW FOCUS
Questions · Narrow nuclear assertion · Event presupposition · Focus semantics · Strong focus · The alternates · Branching focus phrase · Plurals · Double focus · Embedded and repeated focus · Verb focus
3 | ASSOCIATION WITH FOCUS
Adverbial quantifiers · Emotives · Comparatives and superlatives · Cardinals · Exclusives · Simple additives · Scalar additives · Negation
4 | TOPICS, SCRAMBLING AND TAILS
Dislocated topics · Locative topics · Simple topics · Contrastive topics · Verb raising · Scrambling · Tail subjects · Tail objects
5 | NOMINALS
Adjective phrases · Nomina actionis · Nomina agentis · Relational nouns · Postmodifiers · Weak quantifiers · Depictives · Discontinuous modifiers in verse · Postmodifier hyperbaton · Premodifier hyperbaton · Hyperbaton and prosody
Glossary · Symbols · Bibliography · Index
ABBREVIATIONS
Ad Att   Cicero Epistulae ad Atticum
Ad Brut   Cicero Epistulae ad Brutum
Ad Fam   Cicero Epistulae ad Familiares
Ad Qfr   Cicero Epistulae ad Q. fratrem
Agr   Tacitus Agricola
Am   Ovid Amores
Amph   Plautus Amphitruo
Ann   Tacitus Annals
Apul Met   Apuleius Metamorphoses
Ars Am   Ovid Ars Amatoria
Asc Ped Corn   Asconius Ped. In Cornelianam
Asc Ped Mil   Asconius Ped. In Milonianam
Asc Ped Tog   Asconius Ped. Toga Candida
Asin   Plautus Asinaria
Aul   Plautus Aulularia
Bacch   Plautus Bacchides
BAfr   De Bello Africo
BAlex   De Bello Alexandrino
BC   Caesar De Bello Civili
BG   Caesar De Bello Gallico
BHisp   De Bello Hispaniensi
Brut   Cicero Brutus
Capt   Plautus Captivi
Cat   Cicero In Catilinam
Cato   Cato De Agri Cultura
Catull   Catullus
Cels   Celsus De Medicina
Cist   Plautus Cistellaria
Col   Columella De Re Rustica
Col De Arb   Columella De Arboribus
Con   Seneca Controversiae
Curc   Plautus Curculio
Curt Ruf   Curtius Rufus
De Amic   Cicero De Amicitia
De Ben   Seneca De Beneficiis
De Div   Cicero De Divinatione
De Dom   Cicero De Domo Sua
De Fin   Cicero De Finibus
De Har Resp   Cicero De Haruspicum Respons.
De Inv   Cicero De Inventione
De Ira   Seneca De Ira
De Leg Agr   Cicero De Lege Agraria
De Off   Cicero De Officiis
De Or   Cicero De Oratore
De Prov   Cicero De Provinciis Consular.
De Rep   Cicero De Republica
Dem   Demosthenes
Dial   Tacitus Dialogus
Div Caec   Cicero Divinatio in Caecilium
Frontin   Frontinus Stratagemata
Fronto Aur   Fronto Ad Aurelium
Georg   Vergil Georgics
Gran Licin   Granius Licinianus
Hor Ep   Horace Epistles
Hor Sat   Horace Satires
In Pis   Cicero In Pisonem
In Vat   Cicero In Vatinium
Jug   Sallust Jugurtha
Juv   Juvenal
Luc   Cicero Lucullus
Mart   Martial
Men   Plautus Menaechmi
Merc   Plautus Mercator
Met   Ovid Metamorphoses
Mil   Plautus Miles Gloriosus
Most   Plautus Mostellaria
NA   Aulus Gellius Noctes Atticae
ND   Cicero De Natura Deorum
NH   Pliny Naturalis Historia
Orat   Cicero Orator
Petr   Petronius Satyricon
Phil   Cicero Philippics
Pliny Ep   Pliny Epistles
Post Red Pop   Cicero Post Reditum ad Popul.
Post Red Sen   Cicero Post Reditum in Senatu
Pro Arch   Cicero Pro Archia
Pro Balb   Cicero Pro Balbo
Pro Caec   Cicero Pro Caecina
Pro Cael   Cicero Pro Caelio
Pro Clu   Cicero Pro Cluentio
Pro Flacc   Cicero Pro Flacco
Pro Font   Cicero Pro Fonteio
Pro Leg Man   Cicero Pro Lege Manilia
Pro Lig   Cicero Pro Ligario
Pro Mil   Cicero Pro Milone
Pro Mur   Cicero Pro Murena
Pro Planc   Cicero Pro Plancio
Pro Quinct   Cicero Pro Quinctio
Pro Rab Perd   Cicero Pro Rabirio Perduell.
Pro Reg Deiot   Cicero Pro Rege Deiotaro
Pro Rosc Am   Cicero Pro Roscio Amerino
Pro Rosc Com   Cicero Pro Roscio Comoedo
Pro Sest   Cicero Pro Sestio
Pro Sull   Cicero Pro Sulla
Pro Tull   Cicero Pro Tullio
Prop   Propertius
Publ Syr   Publilius Syrus
Quint   Quintilian Institutio Oratoria
Quint Decl   Quintilian Declamationes
Rem Am   Ovid Remedia Amoris
Rhet Her   Rhetorica ad Herennium
Rud   Plautus Rudens
Sall Cat   Sallust Catilina
Sall Phil   Sallust Oratio Philippi
Sen Ep   Seneca Epistulae Morales
Suas   Seneca Suasoriae
Suet   Suetonius De Vita Caesarum
Tac Hist   Tacitus Histories
Ter Adelph   Terence Adelphi
Ter Eun   Terence Eunuch
Ter Hec   Terence Hecyra
Top   Cicero Topica
Trist   Ovid Tristia
Truc   Plautus Truculentus
Tusc   Cicero Tusculan Disputations
Val Max   Valerius Maximus
Varro LL   Varro De Lingua Latina
Varro RR   Varro Res Rusticae
Vell Pat   Velleius Paterculus
Verr   Cicero In Verrem
Vitr   Vitruvius
Introduction
A note to the student
Beginning students are often understandably mystified by the variable word order they find in Latin prose texts. Compared to the regular and systematic structures of English, Latin seems to offer just aimlessly jumbled word salad. A simple sentence reporting an event in which a killed b, a common enough occurrence in Roman history, can surface in half a dozen different word orders
(1) Postea maritus eius tyrannum occidit (Con 2.5.Pr)
Pedanium Secundum servus ipsius interfecit (Ann 14.42)
Brutus occidit liberos (Quint 5.11.7)
Patrem occidit Sex. Roscius. (Pro Rosc Am 39)
Interfecit Opimius Gracchum. (De Or 2.132)
At occidit Saturninum Rabirius. (Pro Rab Perd 31).
(1) Afterwards her husband killed the tyrant (Con 2.5.Pr). One of his own slaves killed Pedanius Secundus (Ann 14.42). Brutus killed his children (Quint 5.11.7). Sex. Roscius killed his father (Pro Rosc Am 39). Opimius killed Gracchus (De Or 2.132). But, you say, Rabirius killed Saturninus (Pro Rab Perd 31).
The sentences all consist of a function (the verb (V), occidit/interfecit) and its two arguments (the subject phrase (S) and the object phrase (O)). The latter are either a proper name or a definite, so two individuals (except for the subject in Ann 14.42, which is indefinite, since Pedanius had four hundred slaves). Each sentence exemplifies one of the six possible orders in which three different items (S, V, O) can be arranged: SOV (Con 2.5), OSV (Ann 14.42), SVO (Quint 5.11), OVS (Pro Rosc 39), VSO (De Or 2.132), VOS (Pro Rab 31). The beginning student has a natural tendency to translate the first argument in Latin as the first argument in English, but that turns the victim into the aggressor in half of the examples, so that Rabirius would get killed by Saturninus rather than vice versa. The problem is that the grammatical relations are not encoded by the serial order but by the inflectional endings. Latin teachers come to the rescue with apparently helpful instructions like "First find the subject, then take the verb, then find the object." Such instructions effectively block the unwanted revisionist translations, but, if you think about it, they simply amount to a procedure for changing Latin word order into English word order. This procedure neutralizes any difference in meaning between the six orders: it treats them as entirely equivalent; it would work just as well if every sentence in every Latin text had its words arranged in alphabetical order. In fact, however, our six sentences are not freely interchangeable. Each different word order is appropriate in a different discourse context, so they cannot be randomly ordered equivalents. A mismatch results in incongruity and causes the discourse to crash. Similarly in English, if we are discussing who stabbed Caesar, you can't say "It was Caesar that Brutus stabbed," although that would be a perfectly congruous response if we were discussing who Brutus stabbed. Each word order represents a different information structure with its own pragmatic meaning. We should not simply discard these pragmatic meanings, leaving just a single semantic meaning for all six sentences: to do so would be to lose a component of meaning that is explicitly encoded in the structure of the linguistic expression by the word order. Rather, each sentence should have its own specific derivation, in which the semantic and pragmatic meanings are calculated in tandem (and not, at least in our theory, independently). Different pragmatic meanings are created by different compositional operations; at the same time, the different derivations converge on a single final semantic meaning, in the case of (1) just 'a killed b.' For the reader to see how this works in practice, we will need to show how specific pragmatic meanings are computed from specific word orders in an overall coherent analytical framework. In short, we will need a philologically sensitive syntax-pragmatics interface for Latin: that is what this book is designed to provide. You will not find a comparable discussion in your Latin textbook and grammar (or anywhere else, for that matter), because the technology of formal pragmatics was only developed fairly recently; but it is now well-established, and there is no reason why we shouldn't use it for the Latin syntax-pragmatics interface, since nineteenth-century traditional grammar is too low-tech for this particular task. The details of the specific analyses we propose will have to stand the test of time, but we hope to have shown in principle how the techniques of formal semantics and pragmatics can be used to help solve an age-old problem in Latin grammar in a way that was not possible, nor even conceivable, with just the resources of traditional grammar.
Because of its interdisciplinary character, this book does not fit into any of the familiar categories. For instance people have asked us whether the book is an introductory text. The answer is yes and no. The book offers readers who do not know about the subject a fairly comprehensive account of the relevant material. On the other hand it is not suitable for the complete novice: some preliminary familiarity with the basics of formal syntax and semantics is presupposed. To make things easier for the student we provide a glossary of a couple of hundred technical terms that are used in the text. The glossary entries aim for explanatory notes rather than formal definitions. Next question: what is the book about? The title suggests that it is about pragmatics, but its scope is actually narrower than the full field of pragmatics: it concentrates on the syntax-pragmatics interface,
which is obviously a major issue for anyone learning Latin. Then is the book about syntax? No, it is about how syntax interfaces with pragmatic meaning, that is about how pragmatic meaning is derived from syntactic structure. We sometimes leave the specifics of a syntactic analysis vague when they do not affect the semantics. With some minor modifications we use the syntax presented in our earlier work on Latin word order, which is based on scrambling, topicalization and crosscategorial discourse functional projections. A rich body of philological evidence supporting an analysis along these lines was presented in our earlier work, and we did not think it necessary to repeat it here. The reader is referred to our earlier work for documentation and substantiation of syntactic generalizations that are unsupported or just briefly exemplified in the present work; specific page references are provided at the end of each chapter. Finally, is the book a textbook or a research monograph? Both: since it is the first book to be written on this particular subject, naturally enough it contains a lot of new material. Specifically, while the theoretical framework is one of the standard theories (slightly modified), the application to Latin is new.
Some readers might wonder whether we were not embarking on a project with an unstable foundation: information structure is a rather slippery subject with an ill-defined and often loosely used terminology that unfortunately tends to conflate pragmatic categories with syntactic categories. Some terms do have fairly precise formal definitions. Strong focus is defined in terms of the set of alternates it evokes. Contextual givenness is defined in terms of generalized entailment: an expression E (of any type, individual <e>, propositional <t> or functional <...,t>) is given if there is an antecedent A (possibly accommodated) in the local discourse context such that [[A]]c entails [[E]]c. The indeterminacy of givenness arises from problems associated with the locality of the antecedent and the conditions that license accommodation, which sometimes make it difficult to predict whether a speaker will treat a constituent as given or not. These indeterminacies in the definition of information structure are real enough, but they are not relevant to the objectives we are pursuing in this book, for the simple reason that we are not trying to predict anything. We just need to elicit the information structure of a sentence as it occurs in its context. That is a task that listeners and readers perform successfully, instantly and effortlessly when they identify the question under discussion and check the congruence of successive sentences in discourse. (The examples we give in the text have been checked in context, even though, for reasons of space, they are cited out of context.) Similar considerations apply to the linguistic encoding of information structure by the syntax and/or prosody. For instance, it is quite a challenge to predict the conditions and incidence of clefting in English, but that is not relevant for our project, because again we are not in the business of making any such predictions. Our semantics does not rest on the (obviously false) inference "if a strong focus, then the pivot of a cleft," but rather on the inference "if the pivot of a cleft, then a strong focus," which can easily be verified in context for any particular example. Once it has been determined that clefts exist in English (for instance, they show up in a
corpus of texts), it becomes important to ask how the meaning of such clefts as do occur is computed. This latter question is the sort of question we are addressing in this book. Neither the frequency of a pragmatically driven structure nor the ambiguity of some examples is particularly relevant. If some structure occurs, it must have a semantic derivation, which is what we aim to provide. To give a Latin example, positing a crosscategorial focus position in the extended XP means that we need to provide a semantics for FocXP but does not in and of itself preclude the existence of focus in situ. (Likewise, calling Latin a discourse configurational language means that it has pragmatically defined syntactic positions, not that it always uses them.) In this book we propose semantic derivations for the syntactic structures arising from a whole range of correlations between syntax and information structure, some applying mainly to certain styles or periods of Latin, others holding for Latin in general. These correlations, which are very reliable and not at all impressionistic, were established in our previous work on Latin word order. There we regularly stated the texts sampled, the sampling criteria and sample size, and we gave relative frequencies in percentages. For many bivariate correlations (over twenty) we reported the results of significance tests, giving the χ², the strength of the correlations as measured by the odds ratio ω, and the significance level. Sometimes the statistical relevance of an observation was obvious and hardly worth proving, for instance that Latin is SOV, not OSV. Sometimes a nonquantitative evaluation was supported by the overall predictions of the theory; once you have a theory, you don't need to prove each case individually: that is what theories are good for.
The next section of this introduction provides a concise overview of the basics of compositional semantics, which is the tool used throughout this book to derive meanings from the different Latin word orders; we also go over the (somewhat simplified) notation we use. We do not provide any detailed account of the theoretical frameworks and technical machinery that we decided not to adopt; these are just briefly characterized in the last two sections of this introduction. There we argue that a semantics for free word order that is based on relaxing the prescribed order of functional application does not fit the Latin philological data, and that some popular formalizations of information structure, which seem to have been designed with English in mind, are not well suited to the Latin language. These last two sections are quite technical and readers who are willing, at least provisionally, to take our theoretical choices on trust can safely skip these hors d’oeuvres and get straight to work on Latin in Chapter 1. On the other hand, should you wish to pursue these issues in greater detail, you can look at some of the titles cited as further reading at the end of this and other chapters. These references include works that are particularly useful and comprehensive and which supply comparative data from modern languages.
Compositional semantics
Consider a sentence like ‘Eporedorix was captured by Caesar.’ It is natural to think that each of the words in the sentence has some meaning (even if in the
marginal case it is just the identity function), that the sentence too has a meaning (we can tell whether it is a true assertion or a false one), and that the meaning of the latter depends in one way or another on the meanings of the former. Our task is to say exactly how (taking the sentence as a static whole, not incrementally as it is perceived in real time). We will want to interpret this sentence relative to a slice of history that is the cavalry battle before the siege of Alesia in 52 B.C. (BG 7.67). If we try to interpret it in some other context, it might be false (for instance if no one was captured) or it might be uninterpretable (for instance if Eporedorix was not among the cast of characters in the context). The interpretation of the proper names is straightforward enough, but what about predicates like 'captured'? Say someone asks you to define the meaning of a predicate like 'Italian.' You might come up with a distillation of the salient characteristics of Enrico Caruso, Vittorio de Sica and Sophia Loren, but for the purposes of compositional semantics such a definition is not mathematically tractable: how do you use it? It is much simpler to define predicates in terms of the set of their members. Then 'Italian' simply denotes the set of Italians. Nothing needs to be said about concepts or cognitive representations, which is not very helpful if you are doing lexical semantics; but for compositional semantics it is just what you want: 'captured' denotes the set of individuals in the domain who got captured. Predication can then be understood as set membership: Eporedorix was a member of the set of those captured: Eporedorix ∈ {x: Captured(x)}. Now let's make the verb active and replace the proper names with corresponding definite descriptions: 'The Roman commander captured the noble Aeduan.' Here we first need to compose the meanings of the complete subject and object phrases; then we need to compose the meaning of the verb phrase 'captured the noble Aeduan,' and finally we need to predicate the verb phrase meaning of the subject phrase. Evidently the compositional system is local: the meaning of larger expressions is computed in terms of the meanings of the smaller expressions they contain. If we compose noncontiguous words we will get either a different meaning or a presupposition failure or just gibberish: 'The noble commander captured the Roman Aeduan,' 'Noble captured Aeduan Roman commander the the.' (This is one of the problems raised by hyperbaton, which we will analyze in Chapter 5.) Saying that composition is local means that it respects syntactic constituency. For instance the meanings of 'noble' and 'Aeduan' are combined to give a single meaning for 'noble Aeduan,' which is in turn combined with the determiner 'the' to pick out the unique contextually salient noble Aeduan; the final result is then used as the subject of the predication. More generally, the syntactic tree is binarily branching, and semantic composition progressively combines the meanings of sister nodes to give a single complex meaning for each mother node. Since the meaning of the verb phrase constituent is composed before it combines with the subject, the argument positions are saturated one at a time: the arguments are not composed as an ordered pair in a single step as they are in predicate logic. (This is like applying an 'add 5' function to 3 instead of applying an 'add' function to <3,5>.) The resulting sentential meaning can still be expressed in
set-theoretic notation, but this leads to nested sets that are difficult to read. So it is more usual to represent sets in terms of their characteristic functions, written in lambda notation. Lambda notation was developed as a way of writing mathematical functions without creating an ambiguity between the function itself and the value it returns: λn.n²+3 is the function that squares any number it is applied to and adds 3. So instead of {x: Captured(x)} (the set of those captured) we usually find λx.Captured(x): here lambda is the function operator, x is the variable on which the function operates, and 'Captured' (the η-reduction of the longer λ-expression) is the characteristic function, which applies to each member of the domain of individuals and returns a truth value, True (1) for each individual who was captured and False (0) for each individual who was not. The function can be spelled out extensionally by listing its ordered pairs: {<Eporedorix,1>, <Cotus,1>, <Cavarillus,1>, <Vercingetorix,0>, etc.}. In this system each compositional step is a functional application. A transitive verb like 'capture' is then a function from an individual (the direct object) to a function from an individual (the subject) to a truth value: λyλx.Capture(x,y). The variables in the lambda prefix are listed in the order in which they are composed; in the system adopted in this book, the variables in the scope (the body of the lambda expression) are listed in the hierarchical order for grammatical relations. In a linear expression that includes both the function and its argument(s), it is useful to delimit the body inside square brackets, for instance λx[Captured(x)](Eporedorix); when a lambda expression appears in a semantic analysis tree, its argument is on a separate branch, and a dot is sufficient to separate the prefix from the body.
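For readers who find it helpful to see this machinery in executable form, here is a minimal sketch in Haskell, whose typed lambda calculus closely mirrors the notation just introduced; the toy domain and all identifiers are our own illustrative assumptions and play no role in the Latin analysis.

    type Entity = String

    -- The characteristic function of the set of those captured: it maps each
    -- individual in the toy domain to True or False (the analogue of λx.Captured(x)).
    captured :: Entity -> Bool
    captured x = x `elem` ["Eporedorix", "Cotus", "Cavarillus"]

    -- A transitive verb is curried: it takes the object first, then the subject,
    -- and returns a truth value (the analogue of λyλx.Capture(x,y)).
    capture :: Entity -> Entity -> Bool
    capture y x = (x, y) `elem` [("Caesar", "Eporedorix")]

    -- Predication as functional application:
    --   captured "Eporedorix"            -- True
    --   capture "Eporedorix" "Caesar"    -- True: Caesar captured Eporedorix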
In order to enhance readability, we have opted to prune a lot of commonly used notational clutter that might otherwise frighten the horses. Take the simple sentence Surgit Clodius 'Clodius gets up.' The semantics for this sentence could be given as follows: [[Surgit]]M,w,g([[Clodius]]M,w,g) = λx[Surgit′(x)](Clodius′), with object-language expressions in bold type and enclosed in double brackets and the denotation expressed in predicate logic with the lambda operator and higher-order variables, using Latin as the metalanguage. We will simplify this by using just the denotation (the part following the equals sign) and dropping the prime symbol. In particular, our semantic analysis trees normally give just the denotation; very occasionally the object language expression is added in bold type. The suprascripts are dispensable because they can be replaced by defaults. Suprascript M relativizes the interpretation to a model; we assume that the model is provided by the contextually relevant situation. Suprascript w relativizes the semantics to a possible world; our semantics is extensional unless otherwise noted; sentences are evaluated relative to the state of affairs in the actual world. Where intensionality becomes relevant, it is expressed by the addition of a world argument to the predicate. Suprascript g relativizes the semantics to the assignment function; we allow the assignment function to operate in the background. Pro-dropped and anaphoric pronouns are written x_pro and x_ana respectively. Traces are coindexed in the syntax and represented by arbitrarily assigned
free variables in the semantics. In our notation these free variables are alphabetically distinguished rather than numerically indexed. Abstraction over free variables standing for argument noun phrase traces is controlled by inflectional case endings.
Functional application is not directional, so we can’t stipulate that the function is always on the left branch and its argument on the right branch, or vice versa. It is also not always possible to use the lexical definitions of words as a basis for distinguishing functions from arguments in the tree, because typeraising can switch the function-argument relation. So it is helpful to have a system that classifies the nodes according to their functional type. Starting with two basic types, <e> for entity (individual) and <t> for truth value, we can define everything else by specifying the types of the domain and of the range of the function. For instance, a predicate is a function from an individual to a truth value <e,t>; a transitive verb is a function from an individual to a function from an individual to a truth value <e,<e,t>>, or <e,et> for short; a generalized quantifier is a function from a predicate to a truth value <et,t>; and so on. Additional simple types can be added for events, worlds, times and numbers; however we often abstract away from worlds, times and events to keep the derivation as simple as possible. If the types of two sister nodes are not compatible, the computation cannot proceed further up the tree. The semantic type system is pretty much homomorphic with the system used in categorial grammar to define syntactic constituents: for instance, a transitive verb is defined as something that combines with a noun phrase to make a constituent that combines with a noun phrase to make a sentence, (S\NP)/NP in one version of the formalism. The notation we adopt designates certain letters of the alphabet as a default for certain variable types, which makes explicit typing redundant in many cases. For instance, we use lower case letters from the end of the alphabet (x,y,z) for individual variables and upper case P,Q for predicate variables; so x is automatically understood to have type <e> and P to have type <e,t>. But since we use the latter both for sets of individuals <e,t> and for sets of events <e,t>, ambiguity can still arise if the types are not spelled out. While we often specify the types in our derivations, we do not do so in all cases; types add clarity but they also aggravate clutter.
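The type inventory can be mirrored directly in a typed programming language; the following Haskell type synonyms are simply our own rendering of <e>, <t>, <e,t>, <e,et> and <et,t>, with a toy Entity type standing in for the domain of individuals.

    type E = String        -- type <e>: individuals (a toy stand-in)
    type T = Bool          -- type <t>: truth values

    type Pred  = E -> T    -- <e,t>: one-place predicates
    type TVerb = E -> Pred -- <e,<e,t>>: transitive verbs (object first, then subject)
    type GQ    = Pred -> T -- <et,t>: generalized quantifiers

    -- If two sister nodes do not fit together, composition cannot proceed:
    -- here the type checker rejects the application; in the semantics
    -- functional application simply fails.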
We also make systematic use of five operators in our analysis, the first and most important of which is abstr, the function abstraction operator. This operator is used to abstract on a free variable in an open formula to create a function. When abstr(x) applies to the free variable x in the expression Captured(x) of type <t>, it returns the predicate λx.Captured(x) of type <e,t>, which can then be applied to the argument Eporedorix to give the sentence Captured(Eporedorix). Lambda abstraction over free variables is the basic mechanism that makes compositional semantics possible in free word order languages. There is one formalism that uses a separate store in its notation to carry the unresolved free variables up the tree. The operator ty is used to raise the type of an expression, shifting an argument into a function. For instance in the same example Eporedorix is the argument of the function λx.Captured(x),
and has type <e>. When ty is applied to Eporedorix, the type of the name is raised to <et,t>, the type of a generalized quantifier, which takes the verb as its argument: the property of getting captured is a member of the set of properties that characterize Eporedorix (sometimes called his individual sublimation, or, in lattice-theoretic terms, the ultrafilter generated by his lattice element): λP[P(Eporedorix)](λx.Captured(x)). More generally, ty operates on the argument of type <a> of a function of type <a,b> to create a new function the argument of which is the original function of type <a,b> and the range of which is the same as the range of the original function (type <b>). The operator ex.cl is used to introduce existential closure over sets. For instance when ex.cl is applied to λx.Captured(x), the result is ∃x.Captured(x). We often use ex.cl to apply existential closure to a set of events. The operator det is the iota operator: it shifts the description of a set into the description of its unique salient member, thereby changing the type from <e,t> to <e>. But for simplicity, we usually just write 'dux' for 'ιy.Dux(y).' Finally the operator exh adds an exhaustivity or exclusivity formula: exh(x) means that x is the unique individual to have the property in question. exh is the only one out of the five operators that does not shift the type of the expression it applies to.
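As a rough computational analogue of these operators, the following Haskell sketch works over a small finite domain; the domain, the names, and the use of a runtime error for presupposition failure are all our own illustrative assumptions (abstr itself corresponds to ordinary lambda abstraction in the host language and so needs no separate definition).

    type E = String
    type Pred = E -> Bool

    domain :: [E]
    domain = ["Eporedorix", "Cotus", "Caesar"]   -- toy domain

    -- ty: raise an individual of type <e> to a generalized quantifier <et,t>,
    -- the set of properties true of that individual.
    ty :: E -> (Pred -> Bool)
    ty x = \p -> p x

    -- ex.cl: existential closure over a set (restricted here to the finite domain).
    exCl :: Pred -> Bool
    exCl p = any p domain

    -- det (iota): shift a set description to its unique salient member;
    -- undefined unless the domain supplies exactly one such member.
    det :: Pred -> E
    det p = case filter p domain of
              [x] -> x
              _   -> error "presupposition failure: no unique individual"

    -- exh: x is the only individual in the domain with the property.
    exh :: E -> Pred -> Bool
    exh x p = p x && all (\y -> not (p y) || y == x) domain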
The syntax we use is a bare-bones preminimalist lingua franca based on the Xbar structure; that way our semantic derivations can be as clean and clear as possible. To keep things simple, we use a lexicalist morphology (words enter the syntactic derivation complete with their inflectional endings) and neither a determiner phrase nor a tense phrase is projected in the syntax. Any XP can in principle be extended by the addition of discourse functional projections, namely focus and various types of topic or topic-like projections. So discourse functional superstructure is crosscategorial; it is not limited to the left edge of the clause. This axiom is unequivocally required by the philological data. Preverbal arguments are born postverbally in their lexical projection in the verb phrase and raise to projections higher in the tree, leaving behind traces that are semantically interpreted as free variables. The argument raising process can be understood literally or metaphorically. Lambda abstraction over the free variables enters the arguments into the derivation. Our derivations take the form of semantic analysis trees derived from syntactic trees by minor adjustments; in this sense they represent a sort of logical form. Most of the time, but not always, we add syntactic labels to the nodes. Semantic operators can instantiate functional heads (this is often the case for abstr), or they can appear on a semantic node that has no syntactic counterpart. We sometimes compact a tree by skipping over a functional head or an operator branch.
Semantics for free word order
In this section we will look at some of the currently available compositional systems for free word order languages to see if any of them could be used for Latin “off the shelf.” Consider again the six sentences in (1): they all say ‘a killed b,’ and each one has a different word order. Using free variables to generalize over the
different arguments, we can represent all of these sentences as V(x,y), where (x,y) represents ordered pairs of arguments, two individuals in the relation 'x kills y.' What sort of compositional mechanisms do we need to derive the correct meaning for the sentences, and how does the compositional process relate to the syntax? In predicate logic, the answer to these questions is actually quite simple. The arguments compose as an ordered tuple (in this case a pair) in one fell swoop: the pair of individuals Rabirius (x) and Saturninus (y) is a member of the set of pairs of individuals <x,y> in the relation 'x kills y.' This would correspond to a flat syntax, a tree with three sister nodes each one of which could host S, O or V. (A binary branching structure would be theoretically possible for the verb-peripheral orders in an associative compositional system, if the arguments were paired into a constituent corresponding to the product type <x,y>, [SO] and, with permutation, [OS].) If the various linear orders of the words convey differences of pragmatic meaning, they obviously do not do so structurally on the ternary tree analysis, because there is only one structure. Nor can differences of pragmatic meaning be associated with different derivational steps, because there is only one step in the compositional process. All this is less than satisfactory: the default order in Latin is SOV, not OSV; if SOV is privileged, it ought to be structurally different from the other orders. But in a flat ternary branching structure, all the orders have the same structure, namely none. For there to be a structural difference, the trees have to be binary branching rather than ternary. It follows that the arguments are composed one by one, with the object being composed before the subject in the default order. So the semantics for the sentence in the default order is not the uncurried expression λ<x,y>.Occidit(x,y), but its familiar curried counterpart, in which the order of the lambda operators sets the order of argument composition; the latter in turn ultimately depends mainly on the subevent structure with which the verb is associated. Then the verb occido gets a lexically assigned type <e,et> (rather than <ee,t>) and a corresponding lambda expression λyλx.Occid-(x,y): Rabirius is a member of the set of killers of Saturninus. As already noted, grammatical relations are encoded by the order of the variables in the lexical definition of the verb and, in common practice, additionally by the alphabetic order of the letters representing the variables (although in principle any letters could be used for the variables: (y,x) or (u,v) would do as well). So the order of composition of (x,y,z) where, for instance, x is the subject, y is the object and z a directional locative, is z,y,x.
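The contrast between composing the two arguments as a pair and composing them one at a time is exactly the contrast between an uncurried and a curried function; Haskell's standard curry and uncurry functions make the relation explicit (the toy verb meaning below is our own assumption).

    type E = String

    -- Uncurried: the arguments enter as an ordered pair in a single step (type <ee,t>).
    occiditPair :: (E, E) -> Bool
    occiditPair (x, y) = (x, y) == ("Rabirius", "Saturninus")   -- x kills y

    -- Curried: the object is composed first, then the subject (type <e,et>).
    occidit :: E -> E -> Bool
    occidit y x = occiditPair (x, y)

    -- The two formats are interdefinable:
    --   curry occiditPair            :: E -> E -> Bool  (subject first, then object)
    --   uncurry (flip occidit) (x,y) == occiditPair (x,y)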
As already noted, functional application is bidirectional: the types of the elements to be composed determine which is the functor and which is the argument, and the argument may be either to the left or to the right of the functor in the syntax: unlike syntax, meaning is not directional. For instance both [VO] and [OV] combine by functional application to give a single result: both occidit Saturninum and Saturninum occidit compose into the meaning λx.Occidit(x,Saturninum). While this neutralizes some of the word order variability, if we are going to account for all the different word orders, we will still be stuck with some sentences in which the object is closer to the verb than the
subject, and others in which the subject is closer to the verb than the object (after resolution of the ambiguous verb medial ones). The meaning of the latter class cannot be derived from a lambda expression which requires the object to be composed first: these sentences would just be uninterpretable. To get the object to compose before the subject you would need some sort of wrapping rule to move the subject out of the way, but that is a questionable technical mechanism which produces the desired result by brute force without explaining why it is needed in the first place.
Categorial grammar has used a variety of strategies for dealing with free word order. The vertical slash is used for directional insensitivity, which helps with the syntax, but, as just noted, this is already built into the semantics. Various typeshifting operations are allowed, for instance raising the subject from the type of an individual <e> to the type of a generalized quantifier <et,t>; the modes of composition are then extended by including logically fancy mechanisms like function composition in addition to the basic mechanism of functional application. Take an example like the following
(2) Didium Veranius excepit (Agr 14.3).
The subject Veranius is raised to type <et,t>, λP.P(Veranius), which then composes with the verb of type <e,et>, λyλx.Excepit(x,y), by functional composition to give an expression of type <e,t>, λy.Excepit(Veranius,y); the latter can then take the direct object Didium of type <e> to give a truth value. This analysis does capture the correct pragmatic structure: Didium is topicalized given information and the following [SV] is the comment, so a constituent of pragmatic meaning. Functional composition here is the nonrepresentational analogue of abstraction over a free variable. Typeraising and functional composition are also used for VSO orders like the fifth example in (1) Interfecit Opimius Gracchum (De Or 2.132). If Opimius is raised to type <et,t> and Gracchum to type <<e,et>,<e,t>>, then they can be combined by functional composition to produce the type <<e,et>,t>, λR.R(Opimius,Gracchum), which takes the verb to give a truth value. This mechanism is also suitable for nonconstituent coordination and left node raising
(3) Debere enim se ait... alius consulatum, alius sacerdotium, alius provinciam. (De Ben 1.5.1).
(2) Veranius succeeded Didius (Agr 14.3).
(3) For one man says that he is indebted for the consulship, another for the priesthood, another for his province (De Ben 1.5.1).
An alternative and simpler solution available in categorial grammar is to make the compositional process sensitive to the order of the words in the syntax (while retaining the information expressed by the grammatical relations and encoded by the inflectional endings). These objectives are achieved just by changing the definition of the lambda operators from an ordered sequence (λy λx) to an unordered set λ{x,y} (with alphabetic order encoding grammatical relations, so that x is understood to be the subject and y the object). Then the argument positions can be saturated in any order.
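To make the typeraising and function composition route concrete, here is a small Haskell rendering of the derivation given above for (2) Didium Veranius excepit; the lexical entries are toy assumptions of our own, but the compositional steps follow the text.

    type E = String
    type Pred = E -> Bool

    -- Lexical entry for excepit: object first, then subject (type <e,et>).
    excepit :: E -> E -> Bool
    excepit y x = (x, y) == ("Veranius", "Didius")

    -- The subject Veranius typeraised to a generalized quantifier <et,t>.
    veraniusGQ :: Pred -> Bool
    veraniusGQ p = p "Veranius"

    -- Function composition of the raised subject with the verb yields the
    -- <e,t> property of being succeeded by Veranius (λy.Excepit(Veranius,y)).
    succeededByVeranius :: E -> Bool
    succeededByVeranius = veraniusGQ . excepit

    -- Applying it to the topicalized object Didium gives the truth value.
    sentence :: Bool
    sentence = succeededByVeranius "Didius"   -- True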
Linking semantics is a family of theories that match argument variables with case or grammatical relation. While these theories differ considerably in their formal apparatus, at least to some extent their differences are notational variants, and they aim for fundamentally the same result. In so-called easy linking semantics the argument variables are indexed with a case label, for instance x_Nom, x_Acc; one could just as well use grammatical relations (x_Subj, x_Obj) or semantic roles (x_Agt, x_Pat). The verb enters the derivation with case-indexed free variables rather than unindexed lambda-bound variables. The noun phrases come from the syntax with their own case labels encoded by the inflectional endings; these semantically activate the corresponding case-indexed variable by triggering lambda abstraction. The result is that the arguments are composed in whatever order they become syntactically available. So going back to our Tacitus example (Didium Veranius excepit), the verb excepit would enter the derivation as Excepit(x_Nom, x_Acc). The verb composes first with the syntactically adjacent subject phrase <Veranius,Nom>: λx_Nom[Excepit(x_Nom, x_Acc)](Veranius). This produces Excepit(Veranius, x_Acc), which then composes with the object phrase Didium in the same way. Different word orders use different intermediate expressions to end up with the same ultimate truth value. λx_Acc.Excepit(Veranius, x_Acc), for instance, is semantically, though not grammatically, passive; it expresses the property of being succeeded by Veranius. Other linking systems work with a slightly different reinterpretation of the notion of verbal argument. Each argument ceases to be simply an individual and is now interpreted as a pair consisting of a grammatical relation (or semantic role or case) and an individual variable: <Subject, x>, <Object,y>. The denotation of a transitive verb is no longer a set of ordered pairs of individuals {<a,b>, <c,d>, etc.}, but a set of sets of two argument-individual pairs taken in any order {[Subject,a; Object,b], [Subject,c; Object,d], etc.}. The denotation of a ditransitive verb is no longer a set of ordered triples of individuals but a set of sets of three argument-individual pairs taken in any order. The verb is linked to its arguments by partial functions which pair individuals with their semantic roles. Every set of argument-individual pairs is the result of a different assignment function. The meaning of a verb is then defined as a linking structure which is itself a function from assignment functions to truth values. So in our Tacitus example the meaning of excepit is λf.Excepit(f), where f is an assignment function. Composition modifies the verb meaning by adding the relevant pair to the role assignment; the order in which arguments are composed depends on the order in which they are presented by the syntax. To prevent a verb from combining with more than one instance of a given argument, when the composition saturates an argument role the assignment function is modified to remove the saturated argument from its domain. For Veranius excepit we have: λx λf′[Excepit(f′ + [Subject, x])], where f′ is the same assignment as f in the linking structure above, except that it does not have the subject argument in its domain.
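A rough computational analogue of the linking idea may help: the verb denotes a function over role assignments, and each case-marked noun phrase adds its pair to the assignment in whatever order the syntax supplies it. The Haskell sketch below, which represents the assignment function as a simple association list and omits the removal of saturated roles, is entirely our own illustrative rendering.

    type E = String
    data Case = Nom | Acc deriving (Eq, Show)
    type Assignment = [(Case, E)]   -- a (partial) assignment of individuals to cases

    -- The verb as a function from assignments to truth values.
    excepit :: Assignment -> Bool
    excepit f = lookup Nom f == Just "Veranius" && lookup Acc f == Just "Didius"

    -- Composing a case-marked noun phrase adds its case-individual pair.
    compose :: (Case, E) -> (Assignment -> Bool) -> (Assignment -> Bool)
    compose (c, x) verb = \f -> verb ((c, x) : f)

    -- Didium Veranius excepit: the verb combines first with the adjacent
    -- subject, then with the object, exactly as the syntax offers them up.
    sentence :: Bool
    sentence = compose (Acc, "Didius") (compose (Nom, "Veranius") excepit) []   -- True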
The so-called neodavidsonian theory of argument semantics is widely used in one form or another throughout the semantic literature. It might seem particularly suited to free word order. In this theory the order of argument composition is in principle free, because arguments are treated as permutable modifiers. A transitive verb like occido is not a relation between two individuals but a one-place predicate over events: λe.Occid-(e), type <e,t> (where e is the type for events). The participants are introduced not as arguments of the verb but as arguments of a semantic role relation between an event and an individual: λe.Occidit(e) ∧ Agent(e,Rabirius) ∧ Patient(e,Saturninus), the set of killing events where the agent of the event was Rabirius and the patient of the event was Saturninus. If the conjunction is taken to be dynamic and noncommutative, then the order of the participant role relations is relevant. But so long as the conjunction retains its normal commutative (symmetric) character, the order in which the modifiers are composed is irrelevant; there is nothing in the formalism itself that would privilege one particular order or make other orders impossible to interpret (or produce the wrong results, so long as the semantic roles are morphologically identified). The theory treats arguments in the same way as it treats permutable adverbial modifiers: this conjunctive semantics is hard to reconcile with a lexically prescribed order of functional application. So the arguments can enter the derivation in whatever order they are offered up by the syntax, and composition proceeds by progressive intersection of sets of events terminating in existential closure.
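In the same spirit, here is a minimal Haskell sketch of the neodavidsonian treatment, with a toy event record, the verb as a one-place predicate over events, participants introduced by role relations, and existential closure over a finite set of events; every name in it is our own assumption.

    type E = String
    data Event = Event { eventVerb :: String, agent :: E, patient :: E }

    -- A toy event domain.
    events :: [Event]
    events = [Event "occidit" "Rabirius" "Saturninus"]

    -- The verb is a predicate over events ...
    occidit :: Event -> Bool
    occidit e = eventVerb e == "occidit"

    -- ... and the participants come in through role relations.
    agentIs, patientIs :: E -> Event -> Bool
    agentIs x e   = agent e == x
    patientIs y e = patient e == y

    -- Because conjunction is commutative, the role conjuncts can be composed
    -- in any order; existential closure then binds the event variable.
    sentence :: Bool
    sentence = any (\e -> occidit e && agentIs "Rabirius" e && patientIs "Saturninus" e) events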
We have sketched a number of different semantic mechanisms that have been suggested for resolving the problem of free word order. Typeraising and functional composition extend the flexibility of the compositional rules. Apart from these, the typical strategy is (or at least gives the impression of being) to adjust or operate on the lexical definition of the verb, for instance by relaxing the prescribed order in which the argument positions are to be saturated, or by using grammatical relations in a linking structure that lexically defines the verb, or by reinterpreting the arguments of the verb as arguments of thematic role relations. In our opinion these mechanisms are by and large unsuitable for Latin. More generally, when confronted with the problem of free word order, we have three options available to us: (1) Ignore the syntax, (2) Adjust the logic, and (3) Adjust the syntax. The first option (not examined above) has various incarnations. One is to argue that, since the ultimate semantic translation of the sentence is the same irrespective of the order in which the arguments are saturated, the order of lambda conversion is irrelevant. This has nothing to say about word order regularities in Latin or crosslinguistically; the two lambda expressions could reduce to a single semantic meaning while still having two distinct pragmatic meanings reflecting their different information structures. Another idea is to reconstruct all the arguments back into their base positions and use the resulting structure for the semantics, or to treat free word order as postsyntactic (prosodic) or ‘phenogrammatic.’ These frameworks tend, to varying degrees, to make the semantics independent of the syntax rather than derived from it. The second option, that
of adjusting the logic, is the one chosen by the various compositional systems we have just surveyed. Given the fundamental importance of constituency in syntax, it is reasonable to ban the structural rules of permutation and associativity (both of which destroy constituency). But in response to free word order, the ban on these rules can be partially or fully lifted. A lexical entry of the form λzλyλx.R(x,y,z) is allowed to permute into λxλyλz / λyλxλz / λzλxλy / λzλyλx / λxλzλy / λyλzλx.R(x,y,z), more compactly Perm R(x,y,z), with the case inflections doing the work that would otherwise be done by fixed argument order. Permutation closure of the serial order in the syntax leads to permutation closure of the order of composition in the semantics. In addition to its potential for uncurrying arguments and creating product argument pairs, associativity can reshape constituency, for instance flipping NP[V NP] into [NP V]NP, with a corresponding shift in the order of semantic composition. In this book we shall explore the third option, leaving the logic the way it is and adjusting the syntax. Here's why we chose this option. Free word order in Latin is used to express pragmatic meaning: it is not clear why one would want to adjust or reinterpret semantic (lexical) meaning in order to account for pragmatic meaning. For instance, you wouldn't want a separate lexical definition for a verb with a topicalized or scrambled direct object: these are syntactic processes designed to encode pragmatic meaning by adding structure. In fact, while Latin word order is free in a grammatical (semantic) perspective, it is quite stable in a pragmatic perspective; in that sense there really isn't a free word order problem at all, although there is room for disagreement about the details of the structures involved and the contribution of prosody. Rather what we need to do is to find compositional mechanisms to express the various pragmatic meanings. The same issue presents itself in a parallel way in the syntax, where it is usual to distinguish between a downstairs layer which is lexically oriented (VP), and two upstairs layers, one referentially oriented (IP or TP), and one operator oriented (CP). The arguments of the Latin verb live upstairs, and they should be semantically interpreted where we encounter them in the surface syntax; there is no reason to think that they are reconstructed back into the verb phrase for semantic interpretation. Adjusting the lexical meaning of the verb looks like a downstairs solution to an upstairs problem. (Even syntactic theories that work with free order base generation assume movement to higher positions at logical form.) Finally, Latin does not have free word order in all sentences: sentences with broad scope focus have a regular default fixed order (which we will analyze in Chapter 1). If the compositional order is intrinsically free, as it is in the systems we have just reviewed, where does the default fixed word order come from? It looks like those systems have solved one problem by creating another.
Semantics for information structure
We will start by quickly running through the descriptive terminology of information structure as we use it in this book. We will not attempt to define or fully characterize the various terms: more discussion and exemplification can be