"Alphabet"? "Syllabary"? How to categorise sign language writing systems? A Proposal.

"Alphabet"? "Syllabary"? How to Categorise Sign Language Writing Systems? A Proposal

The following article assumes the reader has some familiarity with sign languages, Deaf culture and sign language writing systems. Sources will be given where available. All of the sign language writing systems covered should be available on Zrajm's page (link below).

I am in a weird nerd corner of the internet where we talk about sign language writing systems quite a bit. These are orthographies or notation systems used or proposed to write sign languages. 

Another nerd in this space is collating a bunch of these systems into one big page containing as many as possible - Sign Language Writing (© 2025 Zrajm) (WIP). My discussions and perusal of this list have led me to some thoughts about how to categorise said systems.

Looking at the above list at the time of writing (09/2025) gives the following chart;

This is a mixture of two separate scales - Linearity and Unicode Compatibility. I also believe there are other useful metrics by which these systems can be categorised.

Glyph/Grapheme

Before we start, I want to clarify some terminology. In linguistics, the term "glyph" is used very flexibly - Glyph. Thus I want to clarify;
  • Glyph - Individual marks, representing the smallest level of differentiation in the system in question. (aka "mark" or "radical"). Written between | | brackets.
  • Grapheme - A whole unit, including all constituent glyphs (aka "character" or "symbol"). Written between ⟨ ⟩.
To demonstrate, the grapheme ⟨à⟩ in French is made of two glyphs, |a| and |`|.
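This glyph/grapheme split maps neatly onto Unicode's composed and decomposed normalisation forms. A quick sketch using Python's standard unicodedata module (purely illustrative - the terminology above is mine, not Unicode's):

```python
import unicodedata

# The grapheme ⟨à⟩ (U+00E0) can be stored as one precomposed codepoint (NFC)
# or decomposed into its constituent glyphs |a| + |`| (NFD).
composed = unicodedata.normalize("NFC", "\u00e0")    # 1 codepoint
decomposed = unicodedata.normalize("NFD", "\u00e0")  # 2 codepoints

print(len(composed))    # 1
print(len(decomposed))  # 2
print([unicodedata.name(c) for c in decomposed])
# ['LATIN SMALL LETTER A', 'COMBINING GRAVE ACCENT']
```

Both forms render identically, but the decomposed form makes the glyph-level structure explicit.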

Linearity

Linearity is not binary - there are degrees between fully Linear and fully Non-Linear.

These categories are not strict, and there may be others. But they matter for how a script is presented.
  • Linear - Graphemes are almost always single glyphs composed in a single direction.

    • This can be left to right, right to left, top to bottom, bottom to top, or boustrophedon (alternating each line).
    • This category should be taken to mean "mostly linear" - where most glyphs cannot take diacritics. A script with a limited number of diacritics should still be considered linear.
    • Not all graphemes/glyphs have to be aligned, they may vary in size and placement (e.g. Capital vs lower-case, subscript/superscript).
    • Oral Ortho Examples
      • English (Latin)
      • Greek
      • Russian (Cyrillic)
      • Japanese (Katakana, Hiragana)
    • Sign Ortho Examples
      • HamNoSys
      • ELiS
  • Block - Graphemes are composed of multiple glyphs in a consistent pattern or set of patterns.
    • Oral Ortho Examples
      • Korean (Hangul)
    • Sign Ortho Examples
      • Stokoe Notation
  • Diacritical - Graphemes are composed of a primary base glyph, with secondary diacritical glyphs around it. 
    • This category should be taken to mean "heavily diacritical" - meaning that all/most base glyphs can or do take diacritics.
    • Scripts where these modifications are optional are still Diacritical.
    • Oral Ortho Examples
      • Arabic (Abjad)
      • Hebrew (Alefbet ivri)
      • Various Abugidas 
    • Sign Ortho Examples
      • Stokoe Notation System
  • Non-Linear - Graphemes are composed of glyphs in complicated arrangements.
    • Oral Ortho Examples
      • CJK characters (Chinese Hanzi, Japanese Kanji etc)
    • Sign Ortho Examples
      • ASLwrite
      • (Sutton) SignWriting
All of the above terms refer primarily to the construction within a single grapheme. While most writing systems (including Non-Linear systems) have graphemes travel in a specific direction in a linear fashion on the page - the composition within a single grapheme can be much more complicated.

There can also be edge-cases (such as Canadian Aboriginal syllabics) - in which the categorisation could be debated. 

This matters for reading, writing and typing.

The blue line shows an overall direction of reading or writing. Notice how, the further down the list we go, the less regular it becomes, until each grapheme is placed individually in a completely unique way.

Linear scripts are the easiest to type - be that on a typewriter or on modern devices. Non-Linear scripts, on the other hand, are difficult to type, as each grapheme/character must be given its own unique entry. In older technology, this meant that each of the thousands of characters of Chinese had to be an individual physical block for printing - and even with modern technology, Chinese still has an individual Unicode entry for each character.

Block and Diacritical systems have ways of making them typeable, either (1) by adding the diacritics / block-placed glyphs as separate glyphs or (2) by displaying whole graphemes separately in the correct positions relative to one another. Both of these are more software-dependent than Linear systems - but require less initial set-up than Non-Linear systems.

However, Non-Linear systems provide a lot of freedom and flexibility. This accommodates logographic scripts far better than most other options do. These two factors will become important later.

Examples of each in sign languages;

Linear: Hamburg Notation System, or HamNoSys, is probably the poster child of Linear sign language writing systems. While there is some use of diacritics - it attempts to tease out each individual phoneme, and even sub-phoneme, of a sign into an individual character.

Block/Diacritical: While examples in this middle ground are lacking - the most famous example is probably Stokoe Notation System. It regularly uses subscript and superscript glyphs, as well as sometimes glyphs placed above or below others. Whether this makes it a Block or Diacritical script is open to debate. Attempts to linearise it often produce a very different-looking script.

Non-Linear: (Sutton) SignWriting and ASLwrite. SignWriting has a number of internal rules about how glyphs are arranged - but programs for compositing them allow freeform placement in a 2D area. ASLwrite aims to be a "natural" / "go with the flow" system - allowing the writer to make many on-the-fly decisions about internal character placement. Both are key examples of systems which use multiple glyphs in complicated arrangements to produce individual signs as individual graphemes - thus making them Non-Linear.

Ways of typing both SignWriting and ASLwrite have been or are being developed - but both are either software heavy or change the nature of the system to be more linear.

I would argue the reason for the relative popularity of Non-Linear systems in sign languages is that the flexibility allows for;
  1. More condensed written words, as opposed to a simple sign taking up half a page.
  2. More intuitive graphemes, where the word as written actually looks like the sign as signed.
Zrajm uses the term "projectional" for these systems;

"Projectional—A writing systems that projects the three dimensions of signing space onto the two dimensional writing surface is said to be projectional. This is true of, for example, ASLwrite, early & late Si5s, SignWriting, Visagrafía, and VisoGrafia. Antonym: linear."

I like the term; however, I don't think it accounts for non-linear spoken writing systems, nor does it account for all possible forms of non-linear system. I would consider it a category of Non-Linear system.

But this makes digitising these systems a nightmare, representing a barrier to their widescale adoption.

Nuance Window: Whether or not CJK characters are "non-linear" could be debated at length. The internal components of said characters, called "radicals", do have rules as to how they are arranged - and externally the system is linear on a page. But there are plenty of radical placements that follow rules far too complex to be easily programmed into a linear printing machine, and they thus face the same issues that non-linear writing systems for sign languages face. Thus it is useful to classify them as non-linear for the purposes of this article.

Also, here are Korean Syllable blocks if you are interested:
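Incidentally, Unicode encodes this block structure arithmetically: every precomposed Hangul syllable from U+AC00 upward is computed from the indices of its lead consonant, vowel, and optional tail consonant. A minimal sketch of the standard Hangul composition formula:

```python
# Unicode's Hangul composition: a syllable block is a single codepoint
# computed from three glyph indices - lead consonant (L), vowel (V),
# and optional tail consonant (T).
S_BASE, V_COUNT, T_COUNT = 0xAC00, 21, 28

def compose_hangul(lead: int, vowel: int, tail: int = 0) -> str:
    """Combine jamo indices into one precomposed syllable codepoint."""
    return chr(S_BASE + (lead * V_COUNT + vowel) * T_COUNT + tail)

# 한 = ㅎ (lead index 18) + ㅏ (vowel index 0) + ㄴ (tail index 4)
print(compose_hangul(18, 0, 4))  # 한 (U+D55C)
```

So although Hangul is visually a Block script, digitally it behaves like a syllabary: each assembled block is one codepoint.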

Unicode Compatibility

Unicode is "a character encoding standard maintained by the Unicode Consortium designed to support the use of text in all of the world's writing systems that can be digitized". It contains 159,801 characters.

Put in simple terms, if a system uses pre-existing symbols that are used in standard written communication in spoken languages, punctuation or mathematical symbols - then it is Unicode compatible.

This means that most major devices will be able to render it, although not all devices and not all fonts contain all unicode characters.

Most systems are either completely compatible or not compatible at all - those that aren't use completely unique symbols for their glyphs. Some systems fall into a bizarre in-between - using both extant Unicode symbols and also their own.

Some non-compatible systems attempt to become compatible either by (A) using the Private Use Areas or (B) creating fonts that display regular Unicode characters as their own novel characters.

Thus I propose a four-way distinction;
  1. Incompatible (e.g. ASLwrite) - incompatible and incapable of becoming so.
  2. Semi-Compatible (e.g. Dimskis Notation) - a mix of compatible and incompatible characters.
  3. Compatibilised (e.g. ASLfont) - made compatible via the above methods.
  4. Compatible (e.g. Stokoe) - already compatible with no additional steps.
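One rough way to check where a given system's characters fall on this scale is to inspect each codepoint's Unicode general category: "Co" marks the Private Use Areas (route A above), and "Cn" marks unassigned codepoints. A sketch in Python (the classify helper is my own illustration, not a standard API):

```python
import unicodedata

def classify(text: str) -> str:
    """Rough check of where a string sits on the compatibility scale."""
    cats = {unicodedata.category(ch) for ch in text}
    if "Cn" in cats:
        return "contains unassigned codepoints (not renderable)"
    if "Co" in cats:
        return "uses Private Use Area (Compatibilised, font required)"
    return "standard Unicode (Compatible)"

print(classify("B^a"))     # Stokoe-style ASCII -> standard Unicode (Compatible)
print(classify("\ue000"))  # PUA codepoint -> font required
```

This only checks codepoint status, of course - it says nothing about whether a font on the reader's device actually contains the glyphs.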
Unicode is also updated from time to time, and new symbols/characters/graphemes are added - via this mechanism a previously Incompatible/Semi-Compatible system can jump straight to Compatible.

Once again, steps have been / are being taken to compatibilise examples such as ASLwrite; however, they necessarily adjust the system to do so. Most systems could be moved from Incompatible or Semi-Compatible to Compatibilised with sufficient effort - but the classification recognises the current status of the system without any adaptations.

Beyond just whether the glyphs appear in Unicode or have fonts - projection-based non-linear writing systems have more hurdles to compatibility than more linear systems do - either requiring specialist software to display accurately or encoding every single grapheme separately as a character/symbol within Unicode.

Classification

This is primarily what I wanted to write this article about.

In sources, such as Zrajm, the terms "alphabet" and "logography" are used to refer to different systems - but rarely do I see this grounded in an explanation.

A useful source is; List of writing systems - but the same breakdown as presented there will not necessarily be used here.

  • Alphabet - "A writing system that uses a standard set of symbols called letters to represent particular sounds in a spoken language. Specifically, letters largely correspond to phonemes as the smallest sound segments that can distinguish one word from another in a given language."
  • Syllabary - "A set of written symbols that represent the syllables or (more frequently) morae which make up words. A symbol in a syllabary, called a syllabogram, typically represents an (optional) consonant sound (simple onset) followed by a vowel sound (nucleus)—that is, a CV (consonant+vowel) or V syllable—but other phonographic mappings, such as CVC, CV- tone, and C (normally nasals at the end of syllables), are also found in syllabaries."
  • Abugida / Alphasyllabary - "A segmental writing system in which consonant–vowel sequences are written as units; each unit is based on a consonant letter, and vowel notation is secondary, similar to a diacritical mark."
  • Abjad - "A writing system in which only consonants are represented by letter signs, leaving the vowel sounds to be inferred by the reader (unless represented otherwise, such as by diacritics)."
  • Logogram - "A written character that represents a semantic component of a language, such as a word or morpheme."
    • Pictogram - "A graphical symbol that conveys meaning through its visual resemblance to a physical object."
    • Ideogram - "A symbol that is used within a given writing system to represent an idea or concept in a given language."
Before continuing - it should be noted that very few, if any, true logographies exist for spoken languages. Most rely partially on logograms, but also partially on abstract sound correspondences such as the rebus principle - i.e. "sounds like [pictograph]". These systems can still be thought of as logographies, as their primary method of representing words is via meaning-bearing pictograms.

"An example that illustrates the Rebus principle is the representation of the sentence "I can see you" by using the pictographs of [👁️🥫🌊🐑]" 

Logograms are usually differentiated from pictograms and ideograms: pictograms represent objects, ideograms represent concepts, and logograms represent individual morphemes or whole words. Additionally, logograms can be abstract, requiring learnt association rather than direct depiction. Further breakdowns and examples of logographies (and ideographies) can be found here; Proto-writing and ideographic systems and here; Logographic systems.

Furthermore, systems like Japanese - which uses a mixture of Kanji (logographs), hiragana and katakana (both syllabaries) - prove that mixtures of the above systems are possible. There are also cases where the categorisation of a specific system is disputed.

This raises the question: what is a phoneme of a sign language? What are the equivalents of consonants and vowels?
This is the most popular way to analyse sign language phonology - the five-parameter model (handshape, orientation, location, movement, and non-manual expression). It is also the way that most phonetic sign language writing systems are structured.

So what is a sign language syllable?
I have encountered various definitions of the sign language syllable over time, including some which relate primarily to how motion is conducted (like beats, with one beat being one syllable) - but here I will use a perhaps unusual analysis (although loosely in line with the first source cited above).

In spoken languages there are, broadly, two types of sound - consonants and vowels. Usually, vowels form the nucleus of a syllable, and consonants can form the starts and ends of them (morae and syllabic consonants also exist but are not the topic of discussion). 

Syllabaries usually provide separate, sometimes related, glyphs for these consonant-vowel combinations (e.g. Japanese katakana "カ" = "ka", "キ" = "ki"). As such, for the purposes of sign language orthographies, I suggest that a sign language syllabogram be a glyph that merges two or more parameters into one.

This allows us to recognise the following systems;
  1. Alphabet - A system that writes each parameter as separate glyphs. This includes any system that writes multiple glyphs per parameter.
    • HamNoSys - Separates out each parameter, using only minimal modification.
  2. Abugida - A system that writes some parameters as primary/nucleic glyphs, and writes other parameters as modifiers/diacritics on those nucleic glyphs.
    • Stokoe - Uses the handshape as a nucleus, and adds subscript / superscript / diacritic markers around it.
  3. Syllabary - A system that writes single glyphs that simultaneously represent two or more parameters.
    • (Sutton) SignWriting - Combines handshape and orientation, with unique glyphs for each combination.
  4. Abjad - A system that writes only some parameters, and drops others.
    • ASLwrite - Encourages writing only what is necessary to understand the sign, dropping extraneous elements and presenting the sign in the simplest manner possible while remaining readable.
  5. Logography - A system that writes the meaning of signs, and does not represent the parameters.
    • Hand Talk Pictographs - Represents the meaning of signs, likely by using the iconicity of said signs. This system is partially lost and not fully understood, so take this classification with a grain of salt.
An individual system may be more than one of these at once, although should be primarily categorised based on its most prevalent element. In cases where two or more classifications fit equally - or separate parts of the same system do two or more - it can be categorised as a mixed system like Japanese.

Logographies need not be fully logographic in order to qualify as such. Pictograms, ideograms, rebus and abstract depictions of signs can all be used. If the system makes extensive use of a phonetic method of writing signs alongside logograms, then it can be classified as a mixed system.

Notably, this should largely be considered separate from linearity. While they are entwined, it is possible to have unexpected combinations such as a Block Alphabet - as shown by the example of Hangul (Korean). 

Featurality

One final axis on which to classify sign language writing systems is Featurality. Once again the main comparison point is Korean Hangul, where the letters broadly resemble the shapes of the mouth when pronouncing the corresponding sounds.


Source: 43 Interesting, Fun, Cool Facts About Korean Language - CareerCliff
Other systems have been pointed out as featural too - though not all depict the mouth directly. Instead, featurality is based on the fact that similar sounds share similar letters - as in Canadian Aboriginal Syllabics, which rotates and adds dots to letter forms to create similar but distinct letters per syllable.




As such I would like to propose 4 classifications;

  1. Depictive - The glyphs resemble or depict parameters in a way intended to be visually clear.
    • ASLwrite
    • SignWriting
  2. Abstracted - The glyphs share correspondences across similar parameters, but resemblance to the parameter is either not present or obscured.
    • ELiS
    • Visagrafía
    • HamNoSys
  3. Mixed - A mix of featural and arbitrary.
    • Dimskis Notation
  4. Arbitrary - The glyphs do not resemble or depict parameters.
    • Stokoe Notation
It is interesting to note that featural systems are far more frequent among sign languages than spoken languages.

The majority of phonetic writing systems (alphabets, abugidas, abjads and syllabaries) for spoken languages evolved from logographies via the rebus principle (i.e. "sounds like Ox"), and those characters then became simplified with time. Thus they had neither reason nor opportunity to be featural. It is only a few systems, such as Hangul (Korean), that had a distinct moment of artificial creation, which employ this tactic.

Sign language writing systems, on the other hand, are all-bar-one (Hand Talk Pictographs) artificial inventions. Additionally, as sign languages are visual, they offer an easier way to be depicted featurally - simply by drawing the hands, body parts and lines of motion involved in a sign. To depict sounds is harder, as you either have to reach for abstract representation, or depict the mouth/vocal tract.
