Is Move a Turing-complete language? - blockchain

Will Move (on the Diem platform), like Ethereum, be Turing complete? I have read the white paper but have been unable to decide.
This is the white paper: https://developers.libra.org/docs/assets/papers/libra-move-a-language-with-programmable-resources.pdf

It is a Turing-complete programming language. It supports while loops and recursion.
See the Move Basics documentation for more information.

Whether Move is a Turing-complete smart contract programming language or not is unclear; so far, no documentation has mentioned this.

Yes, Move is Turing complete, which means that Libra is Turing complete. Because of this, whether a given Move program terminates cannot be decided statically. However, by ensuring that (1) every bytecode instruction has a non-zero cost, and (2) the amount of gas that any program can be started with is bounded, we get the termination property for programs almost for free.
Check out more details in the official blog: https://www.diem.com/en-us/blog/how-gas-works-on-libra-blockchain/
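The mechanics are easy to picture as a metered interpreter loop. Below is a minimal C++ sketch of the idea, not Move's actual VM: the opcodes, costs, and names are invented for illustration. Because every instruction deducts at least one unit from a finite gas budget, execution must stop after at most that many steps, even for a program that would otherwise loop forever.

```cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <vector>

// Hypothetical instruction set; every opcode has a non-zero cost.
enum class Op { Nop, Add, Jump };

struct Instr {
    Op op;
    int64_t arg = 0;
};

// Deducting a positive cost per instruction bounds the number of steps
// by the initial gas budget, so termination is guaranteed.
int64_t run(const std::vector<Instr>& program, int64_t gas) {
    int64_t acc = 0;
    size_t pc = 0;
    while (pc < program.size()) {
        const Instr& i = program[pc];
        int64_t cost = 1;              // every instruction costs at least 1 gas
        if (gas < cost) throw std::runtime_error("out of gas");
        gas -= cost;
        switch (i.op) {
            case Op::Nop:  ++pc; break;
            case Op::Add:  acc += i.arg; ++pc; break;
            case Op::Jump: pc = static_cast<size_t>(i.arg); break;
        }
    }
    return acc;
}

int main() {
    // An infinite loop: keep adding, then jump back to instruction 0 forever.
    std::vector<Instr> loop = {{Op::Add, 1}, {Op::Jump, 0}};
    try {
        run(loop, 1000);               // bounded gas forces termination
    } catch (const std::exception& e) {
        std::cout << e.what() << "\n"; // prints "out of gas"
    }
}
```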

Related

Is C++ a Turing-complete language?

Obviously that would be the case, but how is it proven in practice?
Is there a minimal reproducible example that shows that this is the case?
Yes it is. From Wikipedia's article on Turing completeness:
"To show that something is Turing-complete, it is enough to show that it can be used to simulate some Turing-complete system. For example, an imperative language is Turing-complete if it has conditional branching (e.g., 'if' and 'goto' statements, or a 'branch if zero' instruction; see one-instruction set computer) and the ability to change an arbitrary amount of memory (e.g., the ability to maintain an arbitrary number of data items)."
The list of imperative languages there includes C++.
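As a concrete (if informal) illustration of "simulate some Turing-complete system", here is a sketch of a single-tape Turing machine simulator in C++. The particular machine below (a unary increment) is made up for the example; the point is only that C++ gives you conditional branching and memory that can grow without a fixed bound (here via std::map), which is exactly what the quoted criterion asks for.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <tuple>
#include <utility>

int main() {
    // A transition maps (state, symbol) -> (new state, symbol to write, head move).
    using Key = std::pair<int, char>;
    std::map<Key, std::tuple<int, char, int>> delta = {
        // Example machine: walk right over '1's, write a '1' on the first
        // blank, then halt (unary increment). States: 0 = scan, -1 = halt.
        {{0, '1'}, {0, '1', +1}},
        {{0, '_'}, {-1, '1', 0}},
    };

    std::map<long, char> tape;            // unbounded tape: cells appear as needed
    std::string input = "111";            // unary 3
    for (long i = 0; i < (long)input.size(); ++i) tape[i] = input[i];

    int state = 0;
    long head = 0;
    while (state != -1) {                 // conditional branching drives the run
        char sym = tape.count(head) ? tape[head] : '_';
        auto it = delta.find({state, sym});
        if (it == delta.end()) break;     // no matching rule: halt
        auto [next, write, move] = it->second;
        tape[head] = write;
        state = next;
        head += move;
    }

    for (const auto& cell : tape) std::cout << cell.second;  // prints 1111 (unary 4)
    std::cout << "\n";
}
```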
I'm not an expert in computational theory, but as a rule of thumb a language is considered Turing-complete if it supports conditional branching (i.e., if statements and goto-style jumps) together with the ability to manipulate an arbitrary amount of memory. By that measure, the majority of languages out there are Turing-complete.
Ref. https://en.m.wikipedia.org/wiki/Turing_completeness

Is MapReduce Turing Complete?

I have two questions regarding the MapReduce framework and Turing Completeness:
First of all, since MapReduce isn't an actual programming language (it's more like a set of rules for processing data), does it make any sense to talk about Turing Completeness?
If it actually makes sense, is the MapReduce system Turing Complete?
Turing completeness applies to instruction sets (i.e. programming languages) but MapReduce is a programming model. So this question only makes sense if you specify the exact set of instructions available in the map and reduce phases.
My bet is that MapReduce over a given instruction set is Turing complete if and only if the instruction set itself is. If the instruction set is Turing complete, then adding MapReduce into the mix doesn't change anything; if it is not, there is nothing in the map or reduce phases that would make it Turing complete.
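To make the "for a given instruction set" point concrete, here is a toy single-machine sketch of the map/shuffle/reduce shape in C++ (this is not Hadoop's or any real framework's API, just the skeleton): the mapper and reducer are arbitrary host-language functions, so the pipeline is exactly as powerful as the language you are allowed to write them in.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Toy map/shuffle/reduce skeleton. The power comes entirely from what the
// mapper and reducer are allowed to compute, i.e. from the host language.
using KV = std::pair<std::string, int>;

std::map<std::string, int> map_reduce(
    const std::vector<std::string>& inputs,
    const std::function<std::vector<KV>(const std::string&)>& mapper,
    const std::function<int(const std::vector<int>&)>& reducer) {
    // Map phase + shuffle: group all emitted values by key.
    std::map<std::string, std::vector<int>> groups;
    for (const auto& in : inputs)
        for (const auto& [k, v] : mapper(in)) groups[k].push_back(v);
    // Reduce phase: combine each key's values.
    std::map<std::string, int> out;
    for (const auto& [k, vs] : groups) out[k] = reducer(vs);
    return out;
}

int main() {
    std::vector<std::string> docs = {"to be or not to be", "to do or not to do"};
    auto mapper = [](const std::string& doc) {
        std::vector<KV> kvs;
        std::istringstream ss(doc);
        std::string w;
        while (ss >> w) kvs.push_back({w, 1});   // emit (word, 1)
        return kvs;
    };
    auto reducer = [](const std::vector<int>& vs) {
        int sum = 0;
        for (int v : vs) sum += v;               // word count
        return sum;
    };
    for (const auto& [word, count] : map_reduce(docs, mapper, reducer))
        std::cout << word << ": " << count << "\n";
}
```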

Arenas where core.logic dominates [soft]

Community Wiki
I don't care about the reputation points, I just want good answers. Feel free to mark this question as community wiki.
Context
I've been working through The Reasoned Schemer, and have made the following observations:
Logic programming is very interesting.
Logic programming is sometimes counter-intuitive.
Logic programming is often "inefficient" (or at least the code I write is).
It seems like in going from
Assembly -> C++, I "give up" control of writing my own machine code
C++ -> Clojure, I give up control of memory management
Clojure -> core.logic/Prolog/miniKanren, I lose partial control of how computations are done
Question:
Besides (1) solving logic puzzles and (2) type inference, what are the domains of problems that logic programming dominates?
Thanks!
Constraint logic programming can be really useful for solving various scheduling, resource allocation, and other nontrivial constraint satisfaction / combinatorial optimization problems. All you write is declarative: the constraints (e.g., only one aircraft can be on the runway at a time), and possibly something you want to minimize or maximize (throughput/waiting time).
There are various well-known flavors of this in Prolog, including CLP(FD), which works over finite integer domains, and CLP(R), which works over the reals. At least CLP(FD) seems to be on core.logic's immediate roadmap.
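To make the flavor concrete, here is a tiny hand-rolled backtracking search in C++. It is not core.logic or CLP(FD), and the flights, ready times, and slot domain are invented for the example; the point is the declarative shape: you state the constraints and let the search find an assignment.

```cpp
#include <iostream>
#include <vector>

// Tiny finite-domain backtracking search (the same idea as CLP(FD), in
// miniature): assign each flight a runway slot so that no two flights share
// a slot and each flight departs no earlier than its ready time.
struct Flight { const char* name; int ready; };

bool consistent(const std::vector<int>& slots, int i, int slot,
                const std::vector<Flight>& flights) {
    if (slot < flights[i].ready) return false;         // ready-time constraint
    for (int j = 0; j < i; ++j)
        if (slots[j] == slot) return false;             // one aircraft per slot
    return true;
}

bool solve(std::vector<int>& slots, int i,
           const std::vector<Flight>& flights, int max_slot) {
    if (i == (int)flights.size()) return true;          // all flights placed
    for (int slot = 0; slot <= max_slot; ++slot) {
        if (consistent(slots, i, slot, flights)) {
            slots[i] = slot;
            if (solve(slots, i + 1, flights, max_slot)) return true;
        }
    }
    return false;                                        // backtrack
}

int main() {
    std::vector<Flight> flights = {{"AF1", 0}, {"BA2", 0}, {"LH3", 1}};
    std::vector<int> slots(flights.size(), -1);
    if (solve(slots, 0, flights, 3))
        for (size_t i = 0; i < flights.size(); ++i)
            std::cout << flights[i].name << " -> slot " << slots[i] << "\n";
}
```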
I believe such Prolog-derived solutions are actively being used in air traffic control and other logistics tasks, although it's hard to get precise information about exactly which technologies such mission- and life-critical systems use under the hood.
Research in artificial intelligence, in particular cognitive robotics and other applications of logic-based knowledge representation, is an area where Prolog is used a lot because of its close relation to logic theory. This relation is very useful because it basically brings theory to life: theorems can be proven on paper, then implemented almost trivially in Prolog and executed, and the executing programs have the proven properties. This allows for programs that are "correct by construction", which is the opposite of first writing programs and then trying to prove properties about them (as is done in formal methods using, e.g., model checking).
The semantic web is another place where logic programming plays a growing role.

Turing complete and parallel programming (true concurrency)

I often see people say that if you can do X in some language, you can do it in another language, which is the Turing-completeness argument. So you'll often get (usually as a snide comment) "sure you can do X with Y, because Y is also Turing complete."
I took CS theory a long time ago, but I don't think this is always true, because I'm not sure where Turing fits into concurrency. For example, there are programming languages where, with the right hardware, you can execute things to happen exactly at the same time, and others where that is not possible.
I understand this is probably more of a hardware/driver issue than a language issue, but I'm curious if, or how, concurrency changes what it means to be Turing complete. Can you be more than Turing complete?
EDIT:
The original reason I asked this question was, in large part, quantum computing. Although the accepted answer doesn't say this, quantum computing is (ostensibly) a subset of Turing computation.
This is a confusing topic for many people; you're not alone. The issue is that there are two different definitions of "possible" in play here. One definition of "possible" is how you're using it: is it possible to do concurrency, is it possible to operate a giant robot using the language, is it possible to make the computer enjoy strawberries, etc. This is the layman's definition of "possible".
Turing completeness has nothing to do with what's possible in the above sense. Certainly, concurrency isn't possible everywhere because (for at least some definition of concurrency) it needs to be the case that the language can produce code that can run on two or more different processors simultaneously. A language that can only compile to code that will run on a single processor thus would not be capable of concurrency. It could still be Turing-complete, however.
Turing completeness has to do with the kinds of mathematical functions that can be computed on a machine given enough memory and running time. For purposes of evaluating mathematical functions, a single processor can do everything multiple processors can because it can emulate multiple processors by executing one processor's logic after the other. The (unproven and unprovable, though falsifiable) statement that all mathematical functions that could be computed on any device are computable using a Turing machine is the so-called Church-Turing thesis.
A programming language is called Turing-complete if you can prove that you can emulate any Turing machine using it. Combining this with the Church-Turing thesis, this implies that the programming language is capable of evaluating every type of mathematical function that any device could evaluate, given enough time and memory. Most languages are Turing-complete because this only requires the capacity to allocate dynamic arrays and use loops and if-statements as well as some basic data operations.
Going in the other direction, since a Turing machine can be constructed to emulate any processor we currently know of and programming languages do what they do using processors, no current programming language has more expressive power than a Turing machine, either. So the computation of mathematical functions is equally possible across all common programming languages. Functions computable in one are computable in another. This says nothing about performance - concurrency is essentially a performance optimization.
Yes and no. There is no known model of computation that can do things that Turing machines cannot do and still be called computation, as opposed to magic¹. Hence, in that sense, there is nothing beyond Turing completeness.
On the other hand, you may be familiar with the saying that “there is no problem that cannot be solved by adding a layer of indirection”. So we might want to distinguish between models of computation that map directly to Turing machines and models of computation that require a level of indirection. “Adding a level of indirection” is not a precise mathematical concept in general, but in many specific cases you can observe the level of indirection. Often the easiest way to prove that some paradigm of computation is Turing-computable is to write an interpreter for it on a Turing machine, and that is exactly a level of indirection.
So let's have a look at what it means to model concurrency. You mention the ability to “execute things to happen exactly at the same time”. That's a specific kind of concurrency, called parallelism, and as far as concurrency goes it's a highly restricted model. The world of concurrency is a lot wilder than this. Nonetheless, parallelism already allows things that require some form of indirection when modeled on a Turing machine.
Consider the following problem: given computer programs A and B (passed on the tape of a universal Turing machine), execute them both, and return the result of either program; your program must terminate unless both A and B are non-terminating. In a purely sequential world, you can execute A and return the result; or you can execute B and return the result. But if you start by executing A, and it happens to be a non-terminating program while B does terminate, then your execution strategy does not solve the problem. And similarly, if you start by executing B, your execution strategy does not solve the problem because B might not terminate even if A does.
Given that it is undecidable whether A or B terminates, you cannot base your decision of which one to execute first on that. However, there is a very simple way to modify your Turing machine to execute the programs in parallel: put A and B on separate tapes, duplicate your automaton, and execute one step of each program until one of the two terminates. By adding this level of processing, you can solve the parallel execution problem.
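Here is a small C++ sketch of that interleaving trick. It models a "program" as a resumable step function (an assumption made purely for illustration): one step of A, then one step of B, and so on, so the combined run terminates whenever at least one of the two programs does.

```cpp
#include <functional>
#include <iostream>
#include <optional>

// Model a "program" as a resumable step function: each call performs one
// step and returns a result once (and if) the program terminates.
using Program = std::function<std::optional<int>()>;

// Dovetail: run one step of A, one step of B, and so on. This terminates
// as long as at least one of the two programs does.
int run_either(Program a, Program b) {
    while (true) {
        if (auto r = a()) return *r;
        if (auto r = b()) return *r;
    }
}

int main() {
    // A never terminates; B terminates after 5 steps with the value 42.
    Program a = [] { return std::optional<int>{}; };
    Program b = [n = 0]() mutable {
        return ++n >= 5 ? std::optional<int>(42) : std::optional<int>{};
    };
    std::cout << run_either(a, b) << "\n";   // prints 42
}
```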
Solving this problem only required a slight modification to the model (it is easy to model a dual-tape Turing machine with a single-tape machine). I nonetheless mention it because it is an important example in lambda calculus (http://en.wikipedia.org/wiki/Lambda_calculus), another important model of computation. The operation of reducing (evaluating) two lambda-terms in parallel until one of them reaches a normal form (terminates) is called Plotkin's parallel or. It is known that it is not possible to write a lambda term (a lambda calculus program) that implements parallel or. Hence lambda calculus is said to be “inherently sequential”.
The reason I mention the lambda calculus here is that most programming languages are closer to the lambda calculus than they are to Turing machines. So as a programmer, insights from the lambda calculus are often more important than insights from Turing machines. The example of parallel or shows that adding concurrency to a language² can open possibilities that are not available in the original language.
It is possible to add concurrency to a sequential language through essentially the same trick as on Turing machines: execute a small piece of thread A, then a small piece of thread B, and so on. In fact, if you don't do that in your language, the operating system's kernel can typically do it for you. Strictly speaking, this provides concurrent execution of threads, but still using a single processor.
As a theoretical model, this kind of threaded execution suffers the limitation that it is deterministic. Indeed, any system that can be modeled directly on Turing machines is deterministic. When dealing with concurrent systems, it is often important to be able to write non-deterministic programs. Often the exact order in which the multiple threads of computation are interleaved is irrelevant. So two programs are equivalent if they do essentially the same computation, but in a slightly different order. You can make a model of concurrent computation out of a model of sequential computation by looking at sets of possible interleavings instead of single program runs, but that adds a level of indirection that is difficult to manage. Hence most models of concurrency bake nondeterminism into the system. When you do that, you can't run on a Turing machine any more.
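A small C++ illustration of that nondeterminism: the interleaving of two threads is chosen by the scheduler, not by the program, so running the sketch below repeatedly can print the recorded events in different orders. The observable behaviour is best described as a set of possible orderings rather than a single one.

```cpp
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

int main() {
    std::mutex m;
    std::vector<std::string> events;       // records the order the scheduler chose

    auto worker = [&](std::string name) {
        for (int i = 0; i < 3; ++i) {
            std::lock_guard<std::mutex> lock(m);
            events.push_back(name + std::to_string(i));
        }
    };

    std::thread a(worker, "A"), b(worker, "B");
    a.join();
    b.join();

    // The interleaving of A's and B's events can differ from run to run:
    // the program's observable behaviour is a set of possible orders.
    for (const auto& e : events) std::cout << e << " ";
    std::cout << "\n";
}
```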
¹ In this respect, thought (what happens in our brain) is still magic in the sense that we have no idea how it's done; we don't have a scientific understanding of it. Anything we know how to reproduce (not in the biological sense!) is Turing-computable.
² Note that here, the language includes everything you can't define by yourself. In this sense, the standard library is part of “the language”.

create my own programming language [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicates:
References Needed for Implementing an Interpreter in C/C++
How to create a language these days?
Learning to write a compiler
I know some C++, I'm VERY good at PHP, a pro at CSS and HTML, and okay at JavaScript. So I was thinking about how C++ was created. I mean, how can a computer understand what code means? How can it read it... so is it possible for me to create my own language, and how?
If you're interested in compiler design ("how can a computer understand what code means"), I highly recommend the Dragon Book. I used it in college and went as far as to create a programming language myself.
"Every now and then I feel a temptation to design a programming
language but then I just lie down until it goes away." — L. Peter
Deutsch
EDIT (for those who crave context):
"[L. Peter Deutsch] also wrote the PDP-1 Lisp 1.5 implementation, Basic PDP-1 LISP, 'while still in short pants' between the age of 12-15 years old."
If you want to understand how the computer understands the code, you might want to learn some assembly language. It's a much lower-level language and will give you a better feel for the kinds of simple instructions that really get executed. You should also be able to get a feel for how one implements higher level constructs like loops with only conditional jumps.
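To make the "loops from conditional jumps" point concrete, here is a tiny C++ comparison: the same loop written normally and rewritten using only an if and a goto, which mirrors what the generated assembly actually does.

```cpp
#include <iostream>

int main() {
    // A normal counted loop...
    int sum = 0;
    for (int i = 0; i < 5; ++i) sum += i;
    std::cout << sum << "\n";        // prints 10

    // ...and the same loop expressed with only a conditional branch and a
    // jump, mirroring the assembly-level structure of the code above.
    int sum2 = 0, i = 0;
top:
    if (i < 5) {
        sum2 += i;
        ++i;
        goto top;
    }
    std::cout << sum2 << "\n";       // prints 10
}
```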
For an even lower-level understanding, you'll need to study up on electronics. Digital logic shows you how you can take electronic "gates" and implement a generic CPU that can understand the machine code generated from the assembly language code.
For really-low level stuff, you can study material science which can teach you how to actually make the gates work at an atomic level.
You sound like a resourceful person. You'll want to hunt down books and/or websites on these topics tailored to your level of understanding and that focus on what you're interested in the most. A fairly complete understanding of all of this comes with a BS degree in computer science or computer engineering, but many things are quite understandable to a motivated person in your position.
Yes, it's possible to create your own language. Take a look at compiler compilers, or the source code of some scripting languages if you dare. Some useful tools are yacc, bison, and lex.
Others have mentioned the dragon book. We used a book that I think was called "compiler theory and practice" back in my university days.
It's not necessary to learn assembler to write a language. For example, JavaScript runs in something called an interpreter, which is an application that executes JavaScript files. In this case, the interpreter is usually built into the browser.
The easiest starting project might be a simple text-based calculator: take a text file, run through it, and perform the calculations. You could write that in C++ very easily.
My first language, for a college project, was a language defined in BNF that was given to us. We then had to write a parser that turned it into a tree structure in memory and then into something called three-address code (which is assembler-like). You could quite easily turn three-address code into real assembler, or write an interpreter for it.
Yup! It's definitely possible. Others have mentioned the Dragon Book, but there is also plenty of information online. llvm, for example, has a tutorial on implementing a programming language: http://llvm.org/docs/tutorial/
I really recommend Programming Language Pragmatics. It's a great book that takes you all the way from what a language is through how compilers work and creating your own. It's a bit more accessible than the Dragon Book and explains how things work before jumping in headfirst.
It is possible. You should learn about compilers and/or interpreters: what they are for and how they are made.
Start learning ASM and reading up on how byte code works and you might have a chance :)
If you know C -- it sounds like you do -- grab a used copy of this ancient book:
http://www.amazon.com/Craft-Take-Charge-Programming-Book-Disk/dp/0078818826
In it there's a chapter where the author creates a "C" interpreter, in C. It's not academically serious like the Dragon book would be, but I remember it being pretty simple, very practical and easy to follow, and since you're just getting started, it would be an awesome introduction to the ideas of a "grammar" for languages, and "tokenizing" a program.
It would be a perfect place for you to start. Also, at $0.01 for a used copy, cheaper than the Dragon Book. ;)
Start with creating a parser. Read up on EBNF grammars. This will answer your question about how the computer can read code. This is a very advanced topic, so don't expect too much of yourself, but have fun. Some resources I've used for this are bison, flex, and PLY.
Yes! Getting interested in compilers was my hook into professional CS (previously I'd been on a route to EE, and only formally switched sides in college), it's a great way to learn a TON about a wide range of computer science topics. You're a bit younger (I was in high school when I started fooling around with parsers and interpreters), but there's a whole lot more information at your fingertips these days.
Start small: Design the tiniest language you can possibly think of -- start with nothing more than a simple math calculator that allows variable assignment and substitution. When you get adventurous, try adding "if" or loops. Forget arcane tools like lex and yacc, try writing a simple recursive descent parser by hand, maybe convert to simple bytecodes and write an interpreter for it (avoid all the hard parts of understanding assembly for a particular machine, register allocation, etc.). You'll learn a tremendous amount just with this project.
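In that spirit, here is a minimal hand-written recursive descent parser/evaluator in C++ (the grammar, names, and structure are my own, just one possible starting point): it handles numbers, + - * /, and parentheses; variable assignment and an "if" can be layered on top later.

```cpp
#include <cctype>
#include <iostream>
#include <stdexcept>
#include <string>

// Grammar:  expr   := term (('+'|'-') term)*
//           term   := factor (('*'|'/') factor)*
//           factor := NUMBER | '(' expr ')'
struct Parser {
    std::string src;
    size_t pos = 0;

    char peek() {                         // skip whitespace, look at next char
        while (pos < src.size() && std::isspace((unsigned char)src[pos])) ++pos;
        return pos < src.size() ? src[pos] : '\0';
    }

    double factor() {
        if (peek() == '(') {
            ++pos;                        // consume '('
            double v = expr();
            if (peek() != ')') throw std::runtime_error("expected ')'");
            ++pos;                        // consume ')'
            return v;
        }
        size_t start = pos;
        while (pos < src.size() &&
               (std::isdigit((unsigned char)src[pos]) || src[pos] == '.')) ++pos;
        if (start == pos) throw std::runtime_error("expected a number");
        return std::stod(src.substr(start, pos - start));
    }

    double term() {
        double v = factor();
        for (char op = peek(); op == '*' || op == '/'; op = peek()) {
            ++pos;
            double rhs = factor();
            v = (op == '*') ? v * rhs : v / rhs;
        }
        return v;
    }

    double expr() {
        double v = term();
        for (char op = peek(); op == '+' || op == '-'; op = peek()) {
            ++pos;
            double rhs = term();
            v = (op == '+') ? v + rhs : v - rhs;
        }
        return v;
    }
};

int main() {
    Parser p{"2 * (3 + 4) - 5"};
    std::cout << p.expr() << "\n";        // prints 9
}
```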
Like others, I recommend the Dragon book (1986 edition, I don't like the new one, frankly).
I'll add that for your other projects, I recommend using C or C++ and ditching PHP, not because I'm a language bigot, but just because I think that working through the difficulties in C/C++ will teach you a lot more about underlying machine architecture and compiler issues.
(Note: if you were a professional, the advice would be NOT to create a new language. That's almost never the right solution. But as a project for learning and exploration, it's fantastic.)
If you want a really general (but very well written) introduction to this topic -- computing fundamentals -- I highly recommend a book titled Code by Charles Petzold. He explains a number of topics you are interested in, and from there you can decide what you want to create yourself.
Check out this book: The Elements of Computing Systems: Building a Modern Computer from First Principles. It takes you step by step through several aspects of designing a computer language, a compiler, a VM, the assembler, and the computer itself. I think this could help you answer some of your questions.