Archive | March 2014

# Sudoku Prolog

I decided to revisit a few algorithms I had implemented in Prolog that solve the famous sudoku puzzle. The Internet has several sudoku solvers implemented in Prolog, but I decided to share some things I made during college and add a few new things to those old implementations.

# On Prolog

As of the publishing of this post I have uploaded only the first sudoku solver, which is very basic and inefficient, to GitHub. You can find it here.

When people start learning programming they generally start with a language that follows an imperative paradigm. An imperative paradigm basically means that you are giving orders to the machine: declare this variable, call this function, jump to this part of the code if a certain condition is true, and so on. We learn the basics of this paradigm with a single language and, with time, we are ready to move to another one. We quickly see that the features of the new language are pretty much the same as the features of the old one. There are variables, functions, loops, maybe even classes. The differences are there, but the basis is mostly the same.

But what happens when you stumble upon a paradigm that is almost completely different? Suddenly you no longer declare variables and call functions. You write rules that are either true or false according to other rules. You don't write an algorithm that produces a solution from an input; you write rules stating which solutions are true for each input.

This is declarative programming. Prolog is a language that follows the declarative paradigm, and its name is an abbreviation of Programmation en Logique, French for Programming in Logic.

During my undergraduate degree I had a course called Declarative Programming where we studied this new paradigm using Prolog. We started with small problems and small programs and moved on to more complex things. The final project that year was an implementation of the Go board game, all in Prolog. At first this course was very hard. Trying to understand how the language worked, how it solved things, and learning about the unification algorithm took effort, but eventually I was able to think in a different way when working with Prolog, and solving problems became much easier in some contexts.

Later in my degree I had a course on Artificial Intelligence. In this course we implemented many algorithms to solve a few problems, all in Prolog. It was during this course that I was able to truly appreciate how powerful and interesting this language is. The way a person thinks when solving a new problem is so different from, and in some ways so much better than, the imperative way.

Of course, Prolog isn't without a few problems:

• It can be very hard to make Prolog programs efficient. You may end up having the program execute millions of useless inferences if the rules are not well written.
• It's just not a trivial language. In most cases a person can read a tutorial on a language or library and start understanding it in a few hours. Understanding this new paradigm and how Prolog works is simply hard. Some people will look at it and decide that the strange language is just not worth their time.
• Prolog is barely used outside academic and educational contexts. Commercial applications of Prolog exist, but they are rare. This makes people not want to learn the language or work with it outside of class.

All this, among other problems. Despite them, I believe it is a language worth exploring because:

• It's just interesting. Its paradigm makes you think in a different way. You are able to look at problems from another perspective. If that is not a good thing when solving problems, then I don't know what is.
• Problems are solved in a few lines of code. You work with a handful of rules and you are able to handle trees, graphs, and matrices, and do all sorts of complex computations. You don't need extra lines of code that define all your data structures, just a few rules that logically describe what you want.
• Abstraction is good. As said in the previous point, you don't waste time defining data structures. You don't really work at the level of variables, classes, and objects. You work at a higher, almost abstract level. You can concentrate on the structure of the formal model instead of thinking about the structure of the implementation.

Given this, let’s develop in Prolog!

# Rift – Lexical Analysis

In which we ramble about Flex, scanners and regular expressions.

To put it bluntly, a compiler reads plain text files and generates something from them. That something may be x86 code, C code, LLVM IR, even things like Python code. For example, gcc reads a few files containing C code and generates a binary file ready to be executed. The first step in this complex process is called lexical analysis. To develop this part of the Rift compiler, a program called flex is going to be used.

So what is lexical analysis? Lexical analysis deals with recognizing certain words and symbols in the plain text file and assigning them meaning. A plain text file is just a sequence of characters, nothing more. Lexical analysis goes through this sequence and recognizes which part of it is a reserved word like if, while, or int, which part is a string, an integer, and so on. The sequence of characters then gets turned into a sequence of tokens, which is then used by the other parts of the compiler.

To illustrate this idea take the following Rift code snippet:

```
void say(char str[]) {
    write(str);
}

async say("Not a Go");
say("Rip Off");
```

For a programmer the content of the code is obvious. There is the definition of a function along with two calls to that function. One of those calls actually starts a new thread running the say function. For the compiler, however, those are just characters: a v, an o, a space, a newline, and so on. After lexical analysis this code will be a sequence of tokens. The first four characters will be recognized as the token T_VOID. The characters of the string “Rip Off” will be recognized as a T_STRING token. The string token will actually have the value of the string associated with it in some form, but more on that later.

At the end of lexical analysis the compiler will “understand” the first line of the code like this:

```
# Original Line
void say(char str[]) {

# How the compiler looks at it during this phase
T_VOID T_ID T_PAR_O T_CHAR T_ID T_BRACK_O T_BRACK_C T_PAR_C T_BRACE_O
```

So there is a token for the void word, there are tokens for the parentheses ( and ), which are T_PAR_O and T_PAR_C respectively, and so on. We no longer see loose characters.

This process is usually combined with the parsing process, which will be discussed in a later post. The component that performs it is also known as a scanner. So how do we implement a scanner for this project? Using flex.

Flex is a lexical analyser generator, which means that it is a program that generates a scanner specific to our needs. To generate this scanner we have to define a file called scanner.l; for this project the initial version of this file is in this commit at GitHub. To fully understand the contents of this file you should read flex's manual, available online at this link. There really isn't any tutorial that can explain how to make a scanner better than the manual, which is quite good.

To understand the very basics note that one of the first things the file defines is the following:

```
ID [a-zA-Z_][a-zA-Z0-9_]*

INT_B2_LIT 0b[0-1]+
INT_B8_LIT 0[0-7]+
INT_B10_LIT [0-9]+
INT_B16_LIT 0x[0-9a-fA-F]+
FLOAT_LIT ([0-9]*\.[0-9]+)|([0-9]+\.[0-9]*)

STRING_LIT \"(\\.|[^\\"])*\"
CHAR_LIT \'(\\.|[^\\'])\'

COMMENT "#".*"\n"
IGNORE [ \t\n]
```

These are regular expressions that define what a few tokens look like. The first one defines what an identifier looks like: any sequence of characters that starts with a letter or underscore and continues with letters, digits, or underscores is recognized as an identifier. Any sequence that starts with a 0, followed by a b, followed by a few 0s and 1s, is the binary literal representation of an integer. It is similar for the other bases and for float numbers, and likewise for the remaining regular expressions.

The rest of the file has instructions that look like the following:

```
"int" { return RET_YY_T(T_INT); }
{ID} { return T_ID; }
```

The first line tells the scanner what to do when it recognizes the word int: it executes the code next to the pattern. RET_YY_T is a macro that gets expanded to yylval.token = T_INT. The parser function, which is yet to be defined, will receive the T_INT token from the scanner, indicating that the reserved word int was scanned.

The second line is similar, with the difference that it identifies an identifier token and uses the ID regular expression defined before instead of a literal value. This rule is actually incomplete, but the missing part will be added later when another part of the parser is done.

# Starting the Rift Programming Language

During the first semester of my Masters degree I had two particular courses that I found quite interesting. One was called Advanced Topics of Compilers and the other Advanced Topics of Distributed Systems. The objective of the first course was the study of a more complex compiler design than the one given in a previous *Compilers* course, while the second one was the study of distributed systems, programming for distributed systems, among other things.

After these two courses started I decided to develop a project that would be used for both courses. That project would be a programming language which would have a concurrency paradigm.

## Languages

At first I started exploring other languages that had concurrency paradigms. Among others, I explored the Go Programming Language, which ended up being one of the main sources of inspiration for this project.

Go is a language that makes it simple to use threads, known as goroutines. To start a new thread the go keyword is used. To see it working visit this link.

```
go say("World")
say("Hello")
```

What the code above does is execute the say() function twice, except the first call runs in a different goroutine than the second one. When executing this code you may see Hello followed by World, or vice versa.

This is such a simple way to start a new thread: just a function call with go before the name of the function. This was something I had to have in my language.

One other thing Go had that was very interesting was channels. Channels are a very simple way to send a value from one goroutine to another. See this example. To send something to a channel and to receive something from it, respectively, you do the following:

```
c <- 10
x := <- c
```

In the code above the c variable would be a channel and the x variable would be some integer variable. The first line is an example of sending the number 10 to channel c. In another thread you can simply execute the second line (assuming c is the same channel for both threads) to receive something from the channel.

Meanwhile, back in the course about distributed systems, I was exploring different ways to program algorithms that used threads. Among a few technologies I used the POSIX Threads standard, or pthreads. This standard defines an API that can be used with the C programming language to create multithreaded programs.

I made some experiments with pthreads for a while by implementing a few algorithms.

At this moment I started thinking about how I would implement the compiler in terms of output code. In this project I just couldn't output low level code like x86 assembly because I had never worked with x86 before. And implementing a compiler that turns a high level language with multithreading features into x86 in such limited time would have been just impossible. Well, not impossible, just very hard with a high probability of failure, all with limited time.

So I decided to just output C code. The output would be C99 code using pthreads.

## Rift

With those things in mind I went on to develop a concept for the language: what the language would look like, what it would allow, what data types it would have, how the multithreading feature would look, whether it would have classes, among many other things.

I came up with this end result.

The language also needed a name. Because it would have the notion of parallelism I started thinking about division. So I went to a thesaurus and looked for synonyms of division. The name Rift showed up. It sounded cool. And like this the development of the Rift Programming Language began!

# Starting

I'm starting this new developer blog to talk about what I'm currently exploring and projects I'm working on. Hopefully I'll have a lot to post about in the next few weeks.