Here's a list of my academic outputs, sorted in reverse chronological order:

Conference & Workshop Proceedings

  1. Nominal Class Assignment in Swahili: A Computational Account. Giada Palmieri and Konstantinos Kogkalidis. Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024). October 2024.
    [ paper ]

    We discuss the open question of the relation between semantics and nominal class assignment in Swahili. We approach the problem from a computational perspective, aiming first to quantify the extent of this relation, and then to explicate its nature, taking extra care to suppress morphosyntactic confounds. Our results are the first of their kind, providing a quantitative evaluation of the semantic cohesion of each nominal class, as well as a nuanced taxonomic description of its semantic content.
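
    As a rough sketch of what quantifying such a relation can look like (an illustration, not the paper's actual protocol), one can probe how recoverable nominal class is from distributional semantics alone; the embeddings and labels below are random placeholders.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      embeddings = rng.normal(size=(1000, 300))  # stand-in for noun vectors
      classes = rng.integers(0, 12, size=1000)   # stand-in for nominal class labels

      # Cross-validated probe accuracy above the majority baseline would
      # indicate semantic signal in class assignment; per-class accuracy
      # would speak to the cohesion of individual classes.
      probe = LogisticRegression(max_iter=1000)
      print(cross_val_score(probe, embeddings, classes, cv=5).mean())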

  2. Learning Structure-Aware Representations of Dependent Types. Konstantinos Kogkalidis, Orestis Melkonian, Jean-Philippe Bernardy. Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024, to appear). December 2023.
    [ paper ] [ code ]

    Agda is a dependently-typed programming language and a proof assistant, pivotal in proof formalization and programming language theory. This paper extends the Agda ecosystem into machine learning territory, and, vice versa, makes Agda-related resources available to machine learning practitioners. We introduce and release a novel dataset of Agda program-proofs that is elaborate and extensive enough to support various machine learning applications – the first of its kind. Leveraging the dataset’s ultra-high resolution, detailing proof states at the sub-type level, we propose a novel neural architecture targeted at faithfully representing dependently-typed programs on the basis of structural rather than nominal principles. We instantiate and evaluate our architecture in a premise selection setup, where it achieves strong initial results.
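
    The paper's architecture is considerably richer, but the nominal-versus-structural contrast it builds on can be shown in miniature: a sketch (mine, not the paper's code) in which variable names are replaced by de Bruijn indices, so that alpha-equivalent programs share a single representation.

      # Replace variable *names* with *structural* references to their binders.
      def to_de_bruijn(term, env=()):
          match term:
              case ('var', x):
                  return ('var', env.index(x))  # distance to the binding lambda
              case ('lam', x, body):
                  return ('lam', to_de_bruijn(body, (x,) + env))
              case ('app', f, a):
                  return ('app', to_de_bruijn(f, env), to_de_bruijn(a, env))

      # \x.\y. x y and \a.\b. a b collapse to the same structure:
      assert to_de_bruijn(('lam', 'x', ('lam', 'y', ('app', ('var', 'x'), ('var', 'y'))))) == \
             to_de_bruijn(('lam', 'a', ('lam', 'b', ('app', ('var', 'a'), ('var', 'b')))))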

  3. Algebraic Positional Encodings. Konstantinos Kogkalidis, Jean-Philippe Bernardy, Vikas Garg. Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024, to appear). December 2023.
    [ paper ] [ code ]

    We introduce a novel positional encoding strategy for Transformer-style models, addressing the shortcomings of existing, often ad hoc, approaches. Our framework provides a flexible mapping from the algebraic specification of a domain to an interpretation as orthogonal operators. This design preserves the algebraic characteristics of the source domain, ensuring that the model upholds the desired structural properties. Our scheme can accommodate various structures, including sequences, grids and trees, as well as their compositions. We conduct a series of experiments to demonstrate the practical applicability of our approach. Results suggest performance on par with or surpassing the current state-of-the-art, without hyperparameter optimizations or "task search" of any kind. Code will be made available at https://github.com/konstantinosKokos/unitaryPE.
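
    A minimal sketch of the idea as the abstract states it, assuming the sequential case (all names are mine and this is not the released implementation): positions are interpreted as powers of a single learnable orthogonal operator, so composing positions is matrix multiplication and relative position falls out algebraically.

      import torch

      d = 8
      A = torch.randn(d, d, requires_grad=True)
      Q = torch.matrix_exp(A - A.T)  # exponential of a skew-symmetric matrix is orthogonal

      def position_op(n: int) -> torch.Tensor:
          return torch.linalg.matrix_power(Q, n)  # Q^n; Q^n @ Q^m == Q^(n+m)

      # Orthogonality gives relative attention scores for free:
      # (Q^n q) . (Q^m k) == q . (Q^(m-n) k), since Q^T == Q^-1.
      q, k = torch.randn(d), torch.randn(d)
      score = (position_op(3) @ q) @ (position_op(5) @ k)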

  4. OYXOY: A Modern NLP Test Suite for Modern Greek. Konstantinos Kogkalidis, Stergios Chatzikyriakidis, Eirini Chrysovalantou Giannikouri, Vassiliki Katsouli, Christina Klironomou, Christina Koula, Dimitris Papadakis, Thelka Pasparaki, Erofili Psaltaki, Efthymia Sakellariou and Hara Soupiona. Findings of the Association for Computational Linguistics: EACL 2024. September 2023.
    [ paper ] [ code ]

    This paper serves as a foundational step towards the development of a linguistically motivated and technically relevant evaluation suite for Greek NLP. We initiate this endeavor by introducing four expert-verified evaluation tasks, specifically targeted at natural language inference, word sense disambiguation (through example comparison or sense selection) and metaphor detection. More than language-adapted replicas of existing tasks, we contribute two innovations which will resonate with the broader resource and evaluation community. Firstly, our inference dataset is the first of its kind, marking not just one, but rather all possible inference labels, accounting for possible shifts due to e.g. ambiguity or polysemy. Secondly, we demonstrate a cost-efficient method to obtain datasets for under-resourced languages. Using ChatGPT as a language-neutral parser, we transform the Dictionary of Standard Modern Greek into a structured format, from which we derive the other three tasks through simple projections. Alongside each task, we conduct experiments using currently available state-of-the-art machinery. Our experimental baselines affirm the challenging nature of our tasks and highlight the need for expedited progress in order for the Greek NLP ecosystem to keep pace with contemporary mainstream research.
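
    To make the first innovation concrete, a hypothetical scoring snippet for set-valued inference labels (the label names and the exact/lenient regimes are illustrative assumptions, not the paper's official metric):

      gold = {"entailment", "contradiction"}  # an ambiguous pair licenses both readings
      predicted = {"entailment"}

      exact = predicted == gold                         # strict: name the full label set
      lenient = bool(predicted) and predicted <= gold   # lax: name any defensible subset
      print(exact, lenient)                             # False True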

  5. SPINDLE: Spinning Raw Text into Lambda Terms with Graph Attention. Konstantinos Kogkalidis, Michael Moortgat and Richard Moot. Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. March 2023.
    [ paper ] [ code ]

    This paper describes SPINDLE, an open source Python module, providing an efficient and accurate parser for written Dutch that transforms raw text input to programs for meaning composition expressed as λ terms. The parser integrates a number of breakthrough advances made in recent years. Its output consists of hi-res derivations of a multimodal type-logical grammar, capturing two orthogonal axes of syntax, namely deep function-argument structures and dependency relations. These are produced by three interdependent systems: a static type-checker asserting the well-formedness of grammatical analyses, a state-of-the-art, structurally-aware supertagger based on heterogeneous graph convolutions, and a massively parallel proof search component based on Sinkhorn iterations. Packed in the software are also handy utilities and extras for proof visualization and inference, intended to facilitate end-user utilization.

  6. Diamonds Are Forever – Theoretical and Empirical Support for a Dependency-Enhanced Type Logic. Michael Moortgat, Konstantinos Kogkalidis and Gijs Wijnholds. Logic and Algorithms in Computational Linguistics 2021. March 2023.
    [ paper ] [ code ]

    Extended Lambek calculi enlarge the type language with adjoint pairs of unary modalities. In previous work, modalities have been used as licensors for controlled forms of restructuring, reordering and copying. Here, we study a complementary use of the modalities as dependency features coding for grammatical roles. The result is a multidimensional type logic simultaneously inducing dependency and function-argument structure on the linguistic material. We discuss the new perspective on constituent structure suggested by the dependency-enhanced type logic, and we experimentally evaluate how well a neural language model like BERT can deal with the subtle interplay between logical and structural reasoning that this type logic gives rise to.

  7. Geometry-Aware Supertagging with Heterogeneous Dynamic Convolutions. Konstantinos Kogkalidis and Michael Moortgat. Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD). May 2022.
    [ paper ] [ code ]

    The syntactic categories of categorial grammar formalisms are structured units made of smaller, indivisible primitives, bound together by the underlying grammar’s category formation rules. In the trending approach of constructive supertagging, neural models are increasingly made aware of the internal category structure, which in turn enables them to more reliably predict rare and out-of-vocabulary categories, with significant implications for grammars previously deemed too complex to find practical use. In this work, we revisit constructive supertagging from a graph-theoretic perspective, and propose a framework based on heterogeneous dynamic graph convolutions aimed at exploiting the distinctive structure of a supertagger’s output space. We test our approach on a number of categorial grammar datasets spanning different languages and grammar formalisms, achieving substantial improvements over previous state of the art scores.
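
    For readers new to constructive supertagging, a small sketch of the object involved (class names are mine, not the paper's code): a categorial type is a tree over a handful of primitives, which a constructive model can decode node by node instead of selecting from a closed label inventory.

      from dataclasses import dataclass

      @dataclass
      class Atom:
          name: str            # an indivisible primitive, e.g. "np" or "s"

      @dataclass
      class Slash:
          direction: str       # "/" seeks its argument rightward, "\" leftward
          argument: 'Atom | Slash'
          result: 'Atom | Slash'

      # A transitive verb, (np\s)/np: consume an np to the right,
      # then an np to the left, to yield an s.
      tv = Slash("/", Atom("np"), Slash("\\", Atom("np"), Atom("s")))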

  8. Discontinuous Constituency and BERT: A Case Study of Dutch. Konstantinos Kogkalidis and Gijs Wijnholds. Findings of the Association for Computational Linguistics: ACL 2022. May 2022.
    [ paper ] [ code ]

    In this paper, we set out to quantify the syntactic capacity of BERT in the evaluation regime of non-context-free patterns, as occurring in Dutch. We devise a test suite based on a mildly context-sensitive formalism, from which we derive grammars that capture the linguistic phenomena of control verb nesting and verb raising. The grammars, paired with a small lexicon, provide us with a large collection of naturalistic utterances, annotated with verb-subject pairings, that serve as the evaluation test bed for an attention-based span selection probe. Our results, backed by extensive analysis, suggest that the models investigated fail in the implicit acquisition of the dependencies examined.
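
    A toy version of an attention-based span selection probe, under stated assumptions (the actual probe is trained; here a candidate span is simply scored by the mean attention mass it receives from the verb):

      import torch

      def select_span(attn_row: torch.Tensor, spans: list[tuple[int, int]]) -> tuple[int, int]:
          """attn_row: attention from the verb to every token, shape (seq_len,)."""
          scores = [attn_row[i:j].mean() for i, j in spans]
          return spans[int(torch.stack(scores).argmax())]

      attn = torch.softmax(torch.randn(12), dim=0)  # stand-in for one BERT attention row
      print(select_span(attn, [(0, 3), (4, 7)]))    # the better-attended candidate subject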

  9. A Logic-Based Framework for Natural Language Inference in Dutch. Lasha Abzianidze and Konstantinos Kogkalidis. Computational Linguistics in the Netherlands. February 2022.
    [ paper ] [ code ]

    We present a framework for deriving inference relations between Dutch sentence pairs. The proposed framework relies on logic-based reasoning to produce inspectable proofs leading up to inference labels; its judgements are therefore transparent and formally verifiable. At its core, the system is powered by two λ-calculi, used as syntactic and semantic theories, respectively. Sentences are first converted to syntactic proofs and terms of the linear λ-calculus using a choice of two parsers: an Alpino-based pipeline, or Neural Proof Nets. The syntactic terms are then converted to semantic terms of the simply typed λ-calculus, via a set of hand-designed type- and term-level transformations. Pairs of semantic terms are then fed to an automated theorem prover for natural logic which reasons with them while using the lexical relations found in the Open Dutch WordNet. We evaluate the reasoning pipeline on the recently created Dutch natural language inference dataset, and achieve promising results, remaining within a 1.1–3.2% performance margin of strong neural baselines. To the best of our knowledge, the reasoning pipeline is the first logic-based system for Dutch.

  10. Fighting the COVID-19 Infodemic with a Holistic BERT Ensemble. Giorgos Tziafas, Konstantinos Kogkalidis and Tommaso Caselli. Proceedings of the Fourth Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda. June 2021.
    [ paper ] [ code ]

    This paper describes the TOKOFOU system, an ensemble model for misinformation detection tasks based on six different transformer-based pre-trained encoders, implemented in the context of the COVID-19 Infodemic Shared Task for English. We fine-tune each model on each of the task’s questions and aggregate their prediction scores using a majority voting approach. TOKOFOU obtains an overall F1 score of 89.7%, ranking first.
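
    The aggregation step is simple enough to sketch directly (thresholds and shapes are assumptions, not the released code):

      import numpy as np

      def majority_vote(scores: np.ndarray, threshold: float = 0.5) -> np.ndarray:
          """scores: (n_models, n_examples) predicted probabilities."""
          votes = scores >= threshold                 # binarize each model's verdict
          return votes.sum(axis=0) > len(scores) / 2  # label wins with > half the votes

      six_models = np.array([[0.9, 0.2], [0.8, 0.6], [0.4, 0.1],
                             [0.7, 0.3], [0.6, 0.4], [0.2, 0.9]])
      print(majority_vote(six_models))  # [ True False]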

  11. Improving BERT Pretraining with Syntactic Supervision. Giorgos Tziafas, Konstantinos Kogkalidis, Gijs Wijnholds and Michael Moortgat. Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD). April 2021.
    [ paper ] [ code ]

    Bidirectional masked Transformers have become the workhorse of the current NLP landscape. Despite their impressive benchmarks, a recurring theme in recent research has been to question such models’ capacity for syntactic generalization. In this work, we seek to address this question by adding a supervised, token-level supertagging objective to standard unsupervised pretraining, enabling the explicit incorporation of syntactic biases into the network’s training dynamics. Our approach is straightforward to implement, induces a marginal computational overhead and is general enough to adapt to a variety of settings. We apply our methodology on Lassy Large, an automatically annotated corpus of written Dutch. Our experiments suggest that our syntax-aware model performs on par with established baselines, despite Lassy Large being one order of magnitude smaller than commonly used corpora.
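
    A minimal sketch of such a joint objective, assuming a shared encoder with two output heads (the weighting term alpha and all names are mine):

      import torch
      import torch.nn.functional as F

      def joint_loss(mlm_logits, mlm_targets, tag_logits, tag_targets, alpha=1.0):
          """Logits: (batch, seq, n_classes); targets: (batch, seq), -100 = ignore."""
          mlm = F.cross_entropy(mlm_logits.flatten(0, 1), mlm_targets.flatten(), ignore_index=-100)
          tag = F.cross_entropy(tag_logits.flatten(0, 1), tag_targets.flatten(), ignore_index=-100)
          return mlm + alpha * tag  # supertagging as an auxiliary syntactic signal

      loss = joint_loss(torch.randn(2, 6, 30000), torch.randint(0, 30000, (2, 6)),
                        torch.randn(2, 6, 800), torch.randint(0, 800, (2, 6)))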

  12. Neural Proof Nets. Konstantinos Kogkalidis, Michael Moortgat and Richard Moot. Proceedings of the 24th Conference on Computational Natural Language Learning. October 2020.
    [ paper ] [ code ]

    Linear logic and the linear λ-calculus have a long-standing tradition in the study of natural language form and meaning. Among the proof calculi of linear logic, proof nets are of particular interest, offering an attractive geometric representation of derivations that is unburdened by the bureaucratic complications of conventional proof-theoretic formats. Building on recent advances in set-theoretic learning, we propose a neural variant of proof nets based on Sinkhorn networks, which allows us to recast parsing as the problem of extracting syntactic primitives and permuting them into alignment. Our methodology induces a batch-efficient, end-to-end differentiable architecture that actualizes a formally grounded yet highly efficient neuro-symbolic parser. We test our approach on ÆTHEL, a dataset of type-logical derivations for written Dutch, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear λ-calculus with an accuracy as high as 70%.
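
    The permutation core is easy to sketch: Sinkhorn normalization iteratively rescales rows and columns of a score matrix, yielding a differentiable relaxation of the permutation that aligns atomic formulas. A minimal log-domain version (not the paper's exact code):

      import torch

      def sinkhorn(log_scores: torch.Tensor, n_iters: int = 20) -> torch.Tensor:
          z = log_scores
          for _ in range(n_iters):
              z = z - z.logsumexp(dim=-1, keepdim=True)  # normalize rows
              z = z - z.logsumexp(dim=-2, keepdim=True)  # normalize columns
          return z.exp()  # approximately doubly stochastic

      P = sinkhorn(torch.randn(5, 5))
      print(P.sum(dim=-1), P.sum(dim=-2))  # both close to all-ones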

  13. ÆTHEL: Automatically Extracted Typelogical Derivations for Dutch. Konstantinos Kogkalidis, Michael Moortgat and Richard Moot. Proceedings of the 12th Language Resources and Evaluation Conference. May 2020.
    [ paper ] [ code ]

    We present ÆTHEL, a semantic compositionality dataset for written Dutch. ÆTHEL consists of two parts. First, it contains a lexicon of supertags for about 900,000 words in context. The supertags correspond to types of the simply typed linear lambda-calculus, enhanced with dependency decorations that capture grammatical roles supplementary to function-argument structures. On the basis of these types, ÆTHEL further provides 72,192 validated derivations, presented in four formats: natural-deduction and sequent-style proofs, linear logic proof nets and the associated programs (lambda terms) for meaning composition. ÆTHEL's types and derivations are obtained by means of an extraction algorithm applied to the syntactic analyses of LASSY Small, the gold standard corpus of written Dutch. We discuss the extraction algorithm and show how 'virtual elements' in the original LASSY annotation of unbounded dependencies and coordination phenomena give rise to higher-order types. We suggest some example use cases highlighting the benefits of a type-driven approach at the syntax-semantics interface. The following resources are open-sourced with ÆTHEL: the lexical mappings between words and types, a subset of the dataset consisting of 7,924 semantic parses, and the Python code that implements the extraction algorithm.

  14. Constructive Type-Logical Supertagging with Self-Attention Networks. Konstantinos Kogkalidis, Michael Moortgat and Tejaswini Deoskar. Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019). August 2019.
    [ paper ] [ code ]

    We propose a novel application of self-attention networks towards grammar induction. We present an attention-based supertagger for a refined type-logical grammar, trained on constructing types inductively. In addition to achieving a high overall type accuracy, our model is able to learn the syntax of the grammar's type system along with its denotational semantics. This lifts the closed-world assumption commonly made by lexicalized grammar supertaggers, greatly enhancing the model's generalization potential. This is evidenced both by its adequate accuracy over sparse word types and by its ability to correctly construct complex types never seen during training, a feat which, to the best of our knowledge, had not been accomplished before.
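
    Constructing types inductively amounts to emitting a type symbol by symbol rather than picking one opaque label; a prefix (Polish) serialization along these lines (the paper's exact encoding may differ) makes any type, seen or unseen, expressible over a small fixed vocabulary:

      def serialize(t) -> list[str]:
          match t:
              case str(atom):             # a primitive type
                  return [atom]
              case (op, left, right):     # a binary connective
                  return [op] + serialize(left) + serialize(right)

      print(serialize(('\\', 'np', ('/', 's', 'np'))))  # ['\\', 'np', '/', 's', 'np']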

  15. Towards a 2-Multiple Context-Free Grammar for the 3-Dimensional Dyck Language. Konstantinos Kogkalidis and Orestis Melkonian. At the Intersection of Language, Logic, and Information. July 2019.
    [ paper ] [ code ]

    We discuss the open problem of parsing the Dyck language of 3 symbols, D3, using a 2-Multiple Context-Free Grammar. We attempt to tackle this problem by implementing a number of novel meta-grammatical techniques and present the associated software packages we developed.
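
    For reference, membership in D3 is easy to check under the language's usual definition (in every prefix of a word over {a, b, c}, #a ≥ #b ≥ #c, with all three counts equal at the end); the open problem concerns the grammar, not recognition:

      def in_d3(word: str) -> bool:
          a = b = c = 0
          for ch in word:
              a, b, c = a + (ch == 'a'), b + (ch == 'b'), c + (ch == 'c')
              if not a >= b >= c:
                  return False
          return a == b == c

      print(in_d3("abacbc"), in_d3("abcabc"), in_d3("acb"))  # True True False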

Drafts & Preprints

  1. On Tables with Numbers, with Numbers. Konstantinos Kogkalidis and Stergios Chatzikyriakidis. August 2024.
    [ paper ]

    This paper is a critical reflection on the epistemic culture of contemporary computational linguistics, framed in the context of its growing obsession with tables with numbers. We argue against tables with numbers on the basis of their epistemic irrelevance, their environmental impact, their role in enabling and exacerbating social inequalities, and their deep ties to commercial applications and profit-driven research. We substantiate our arguments with empirical evidence drawn from a meta-analysis of computational linguistics research over the last decade.

  2. Deductive Parsing with an Unbounded Type Lexicon. Konstantinos Kogkalidis, Michael Moortgat, Giorgos Tziafas and Richard Moot. August 2019.
    [ paper ]

    We present a novel deductive parsing framework for categorial type logics, modeled as the composition of two components. The first is an attention-based neural supertagger, which assigns words dependency-decorated, contextually informed linear types. It requires no predefined type lexicon, instead utilizing the type syntax to construct types inductively, enabling the use of a richer and more precise typing environment. The type annotations produced are used by the second component, a computationally efficient hybrid system that emulates the inference process of the type logic, iteratively producing a bottom-up reconstruction of the input’s derivation-proof and the associated program for compositional meaning assembly. Initial experiments yield promising results for each of the components.