May heaven guide my fingers. God protect my program counter. Keep me on the right path.
% hex_digit(+C, -V): V is the numeric value of hex digit C,
% e.g. hex_digit('b', V) gives V = 11.
hex_digit(C, V) :-
    char_code(C, Cn),
    char_code('0', Zeron),
    char_code('9', Ninen),
    char_code('a', LowerAn),
    char_code('A', UpperAn),
    char_code('f', LowerFn),
    char_code('F', UpperFn),
    (   Cn >= Zeron, Cn =< Ninen, V is Cn - Zeron
    ;   Cn >= LowerAn, Cn =< LowerFn, V is Cn - LowerAn + 10
    ;   Cn >= UpperAn, Cn =< UpperFn, V is Cn - UpperAn + 10
    ).
Name:
Anonymous2012-12-30 6:26
>>9
Prolog isn't much more than a purely functional untyped language, and in these cases I'm not using backtracking, and I'm only using unification for simple pattern matching. So yeah, pick your favorite untyped language and you can do this using a purely functional subset of it.
Name:
Anonymous2012-12-30 6:27
Okay, you've piqued my interest. What does it suck at doing?
>>15
That can be fixed. I meant, what are the semantic and syntactic blind spots, what are the things that are painful to write?
Name:
Anonymous2012-12-30 7:00
>>16
Not necessarily. Prolog is too powerful. It's based on relations instead of functions. Rather than an expression evaluating to a single value, you enumerate the set of possible values it could take, and loop through them until the program's goal is satisfied. I suppose a high-performance implementation is possible, but it wouldn't be easy, and it's debatable whether it would be worth the effort. Also you can't nest expressions, so you end up with a lot of temporary variables.
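To make the enumeration concrete, here's a query against the standard member/2 predicate (the `;` output format shown is SWI-Prolog's):

```prolog
% member/2 is a relation, not a function call: with X unbound,
% prolog enumerates each solution on backtracking.
?- member(X, [1, 2, 3]).
X = 1 ;
X = 2 ;
X = 3.
```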
>>17
Why can't you nest expressions (assuming you really meant expressions and not just statements)? Is there a good reason for that?
Name:
Anonymous2012-12-30 7:33
>>18
I guess it's just to keep the syntax closer to relations. Rather than writing y = f(x), you'd write f(X,Y), and rather than writing y = f(g(x)), you'd write g(X,T), f(T,Y). I forgot to mention variables need to start with capital letters. I suppose in this form it's more intuitive to look at when the relations match with multiple values. And in some cases it's actually possible for prolog to give you the preimage of a value, like in f(X,3), although that generally doesn't work out.
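A small sketch of that style (double/2 and add_one/2 are made-up example relations, not builtins):

```prolog
double(X, Y)  :- Y is 2 * X.
add_one(X, Y) :- Y is X + 1.

% y = add_one(double(x)) becomes a conjunction with a temporary T:
composed(X, Y) :- double(X, T), add_one(T, Y).
```

So composed(3, Y) binds Y to 7.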
Name:
Anonymous2012-12-30 7:46
>>19
> Rather than writing y = f(x), you'd write f(X,Y), and rather than writing y = f(g(x)), you'd write g(X,T), f(T,Y).
Interesting, and it makes sense. Does it work out fine for larger structures?
> And in some cases it's actually possible for prolog to give you the preimage of a value, like in f(X,3), although that generally doesn't work out.
Yeah, like SHA256(X,"..."). This brings me to an interesting question; how exactly are the possibilities resolved? It sounds to me like a very simple query could take longer than the heat death of the universe.
Name:
Anonymous2012-12-30 7:59
>>20
Yeah, in cases like that you'll typically get some error like the following:
?- 42 is X + Y.
ERROR: is/2: Arguments are not sufficiently instantiated
It could start enumerating all the pairs X,Y that add up to 42, but stuff like that probably isn't what you want. You could certainly define your own predicates that would do that, though, and if you implemented SHA256 using them then prolog would be able to reverse the operations and calculate preimages, although with all the branching possibilities, dead ends, and backtracking, it wouldn't terminate within a reasonable amount of time.
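For a toy example of a predicate that does enumerate like that, here is addition on Peano numerals (z is zero, s(N) is successor), a textbook relation that runs in any direction:

```prolog
% plus(M, N, K): K is M + N over Peano numerals.
plus(z, N, N).
plus(s(M), N, s(K)) :- plus(M, N, K).
```

Querying plus(X, Y, s(s(z))) backtracks through all three pairs that sum to two: z with s(s(z)), s(z) with s(z), and s(s(z)) with z.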
Name:
Anonymous2012-12-30 8:20
>>21
So what says which things can be computed trivially, which things are impractical to compute, and which things have too many solutions to be meaningful (if I understood your post correctly)?
The difference between the first two is probably halting-complete, but I'd like to know more about the reduction rules that are used.
Name:
Anonymous2012-12-30 8:37
>>22
As far as I've seen there is no distinction. You just set up the logical constraints as a network of relations and prolog will plug away until it eventually hits a solution to the constraints. If you program with care, the number of branches it must explore will be small and it will come to a solution quickly. Otherwise, if your solution is silly, or if the problem is just hard and you are leveraging prolog's bruteforceness, then you can wait for prolog to explore a very large number of possible solutions until it eventually arrives at one.
But there are some heuristics. I think one is to use depth limited search to avoid going down branches that are too long.
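For what it's worth, SWI-Prolog exposes this as call_with_depth_limit/3, which unifies its result with depth_limit_exceeded instead of searching past the limit; a rough iterative-deepening sketch on top of it:

```prolog
% Retry Goal with a doubled limit whenever the depth limit is hit.
% (Loops forever on an unprovable Goal, as iterative deepening does.)
deepen(Goal, Limit) :-
    call_with_depth_limit(Goal, Limit, Result),
    (   Result == depth_limit_exceeded
    ->  Limit2 is Limit * 2,
        deepen(Goal, Limit2)
    ;   true
    ).
```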
Name:
Anonymous2012-12-30 9:31
>>23
If you write your code in a purely-functional style, you should have no issues with that and the solution should be found in O(1), correct?
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct mapNode {
    char *key, *val;
    struct mapNode *next;
} mapNode;

/* Copies k and v, since the caller restores the bytes it nulled. */
static mapNode *make_node(const char *k, const char *v) {
    mapNode *n = malloc(sizeof *n);
    n->key = strdup(k);
    n->val = strdup(v);
    n->next = NULL;
    return n;
}

mapNode *cgi_decode(char *s) {
    mapNode *curr = NULL, *new;
    char *k = s, *v = s, *equ = s, temp;
    bool done_last = false;

    for (; !done_last; s++) switch (*s) {
    case '\0':
        done_last = true;
        /* fall through: the end of the string ends a pair, like '&' */
    case '&':
        /* Pointers are set up like this:
         *
         *              equ             s
         *               v               v
         * 'a' 'b' 'c' '=' 'd' 'e' 'f' '&'
         *   ^               ^
         *   k               v
         *
         * We temporarily null *equ and *s, to
         * make k and v null-terminated strings.
         */
        temp = *s;
        *s = '\0'; *equ = '\0';
        new = make_node(k, v);
        new->next = curr;
        curr = new;
        *s = temp; *equ = '=';
        k = s + 1;
        break;
    case '=':
        equ = s;
        v = s + 1;
        break;
    }
    return curr;
}

int main(void) {
    /* A mutable buffer: cgi_decode temporarily writes into the
     * string, so a string literal here would be undefined behavior. */
    char query[] = "abc=def&ghi=jkl";
    for (mapNode *p = cgi_decode(query); p != NULL; p = p->next)
        printf("%s=%s\n", p->key, p->val);
    return 0;
}
>>24
Well, there is still the asymptotic growth of the algorithm itself. But if you are referring to overhead introduced by prolog alone, then yes, you can write prolog that will perform the same asymptotically as an equivalent program written in a purely functional language. Just write all of your predicates so that they only produce a single unique value, like a function. In some cases the prolog compiler will be able to detect when a predicate is like this, and it can perform optimizations like tail call optimization. But it isn't always so easy.
For instance:
God be my shepherd. Guide me through the valley of darkness. Lead me to satisfiable constraints.
In most languages higher order functions can call functions passed as parameters without much overhead. But prolog needs to be prepared for F being a general predicate on two variables.