>>21,23-24
For optimal results, use a logistic function, e.g. 0.5 + 9.5/((1 + 0.5e^(-1.5(t-10)))^2), to normalize the inputs. You can bound the sleep time arbitrarily this way. I'm not writing that in bash.
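Not in bash, then, but here's roughly what that curve looks like in C (a sketch of my own, using the constants quoted above; link with -lm):

#include <math.h>

/* Maps an input t to a sleep time bounded between 0.5 and 10 seconds.
 * Monotonically increasing in t, so relative order is preserved. */
double bounded_delay(double t)
{
    double s = 1.0 + 0.5 * exp(-1.5 * (t - 10.0));
    return 0.5 + 9.5 / (s * s);   /* -> 0.5 as t -> -inf, -> 10 as t -> +inf */
}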
Name: Anonymous 2011-01-20 18:43
>>26
Still doesn't solve the race condition problem. Over here it happens even with gaps as large as the ones in: ./sleepsort 0.02 0.03 0.01
Name: Anonymous 2011-01-20 19:47
using System;
using System.Collections.Generic;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;

namespace SleepSort
{
    public class SleepSort
    {
        static void Main(string[] args)
        {
            var xs = new[] { 11, 215, 12, 1985, 12, 1203, 12, 152 };
            var sorted = Sort(xs, x => x);
            Console.Write("Sorted:");
            foreach (var x in sorted)
                Console.Write(" {0}", x);
            Console.WriteLine();
            Console.ReadKey(true);
        }

        public static IEnumerable<T> Sort<T>(IEnumerable<T> xs, Func<T, int> conv)
        {
            const int WEIGHT = 40000;
            // Gate that holds every worker until all threads have been started.
            var ev = new EventWaitHandle(false, EventResetMode.ManualReset);
            var q = new Queue<T>();
            // Each worker spin-waits in proportion to its value, then enqueues it.
            ParameterizedThreadStart f = x =>
            {
                ev.WaitOne();
                Thread.SpinWait(conv((T)x) * WEIGHT);
                lock (q) { q.Enqueue((T)x); }
            };
            var ts = xs.Select(x => { var t = new Thread(f); t.Start(x); return t; }).ToArray();
            ev.Set();   // release all workers at once
            foreach (var t in ts)
                t.Join();
            return q;
        }
    }
}
It's still a bit of a stochastic sort.
>>27
Well no, in fact it makes the race condition worse. Many of the differences are going to be quite small indeed after coming out of that function.
I can think of a few enhancements:
First: tailor A and K (http://en.wikipedia.org/wiki/Generalised_logistic_curve) to the range of inputs; there's a rough sketch of this at the end of this post. This adds some complexity, but it is nearly canceled by the work of spawning the sleeps and calculating Y(t) anyway.
Second: keep track of deltas upon return. When a gap is large enough relative to previous gaps (Feigenbaum constant?), start a new partition and reiterate the process on each. (Note: multiple partitions can be sleep-sorted at the same time.) This will distribute the sleep times more evenly among elements; however, there is still no guarantee on most platforms that any kind of sleep sort will be free of race conditions, on any set of inputs, no matter the minimum gap between elements.
Third, a meta-enhancement: make best guesses at the range values used in the first enhancement so the calculation stays close to optimal. Likewise, establish a good terminal condition for the second, probably when every partition has one element.
If you have a huge range of inputs with many clusters, I think these variations beat anything less sophisticated. I've no idea how often that is the case (actually... does anyone have info on this?) but it seems like it's probably common enough. Even if the clustering isn't bad, the partitioning process deals with it automatically.
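A rough sketch of that first enhancement (my code, not the poster's; the 0.5 s floor, 10 s ceiling, and the Q/B/M/nu choices are illustrative assumptions): fit the A and K asymptotes of the generalised logistic to the observed input range so every sleep time lands in a bounded window.

#include <math.h>

/* Y(t) = A + (K - A) / (1 + Q * exp(-B * (t - M)))^(1/nu) */
double generalised_logistic(double t, double A, double K,
                            double Q, double B, double M, double nu)
{
    return A + (K - A) / pow(1.0 + Q * exp(-B * (t - M)), 1.0 / nu);
}

/* Map value v from the observed input range [lo, hi] to a sleep time in (0.5, 10) s. */
double delay_for(double v, double lo, double hi)
{
    double M = 0.5 * (lo + hi);          /* centre the curve on the input range  */
    double B = 6.0 / (hi - lo + 1e-9);   /* steepness scaled to the input spread */
    return generalised_logistic(v, 0.5, 10.0, 1.0, B, M, 1.0);
}

Because B > 0 the mapping is strictly increasing, so relative order is preserved; only the gaps get squashed, which is exactly the race-condition complaint above.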
Block the threads until every single one is created; you should also probably indicate to the OS that the threads don't each need a very large stack.
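In pthreads that suggestion looks something like this (a sketch of my own, not from the thread; compile with gcc -pthread):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static pthread_barrier_t gate;

static void *worker(void *arg)
{
    int v = *(int *)arg;
    pthread_barrier_wait(&gate);   /* nobody sleeps until every thread exists */
    sleep(v);
    printf("%d\n", v);
    return NULL;
}

int main(int argc, char **argv)
{
    int n = argc - 1;
    if (n < 1)
        return 1;

    int *vals = malloc(n * sizeof *vals);
    pthread_t *tids = malloc(n * sizeof *tids);

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024);   /* far below the ~8 MiB default */

    pthread_barrier_init(&gate, NULL, n);
    for (int i = 0; i < n; i++) {
        vals[i] = atoi(argv[i + 1]);
        pthread_create(&tids[i], &attr, worker, &vals[i]);
    }
    for (int i = 0; i < n; i++)
        pthread_join(tids[i], NULL);
    return 0;
}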
>>35
Yes, I was thinking about whether this algorithm could have potential use for a huge amount (e.g. billions) of numbers in a small range... but I'm not sure which OS would handle billions of processes/threads. So, like I said before, it's essentially a radix sort in the time dimension. The way around it, as others have said, would be to use radix-style buckets to group the numbers, then sort each bucket, etc.
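A rough sketch of the bucketed variant (my code, not the poster's; each bucket is finished off with qsort here for brevity, but you could just as well sleep-sort every bucket, and since the buckets are independent they can run concurrently):

#include <stdlib.h>

#define NBUCKETS 16

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *)a > *(const int *)b) - (*(const int *)a < *(const int *)b);
}

/* lo and hi are the minimum and maximum of xs. */
void bucketed_sort(int *xs, size_t n, int lo, int hi)
{
    int *buckets[NBUCKETS] = {0};
    size_t counts[NBUCKETS] = {0};

    /* radix-style pass: each value goes into the bucket for its slice of [lo, hi] */
    for (size_t i = 0; i < n; i++) {
        size_t b = (size_t)(((long long)xs[i] - lo) * NBUCKETS / ((long long)hi - lo + 1));
        buckets[b] = realloc(buckets[b], (counts[b] + 1) * sizeof(int));
        buckets[b][counts[b]++] = xs[i];
    }

    /* sort each bucket on its own, then write the buckets back in order */
    size_t k = 0;
    for (int b = 0; b < NBUCKETS; b++) {
        if (counts[b])
            qsort(buckets[b], counts[b], sizeof(int), cmp_int);
        for (size_t i = 0; i < counts[b]; i++)
            xs[k++] = buckets[b][i];
        free(buckets[b]);
    }
}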
>>50
More like :: Int32 -> IORef -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> Maybe a -> IO () (I counted.)
To the person reading this in the future: >>54-60 are from something which we call an ``imageboard'', more specifically the imageboard called ``/g/'', which means they enjoy making bad posts with little to no content.
/*
* Inspired by the valiant /prog/lodytes
* one of my personalities wrote an OMP
* implementation.
*
* Since we are currently training SCC
* (SCC Compiles C) we currently have to
* use GCC to compile it.
*/
/*
* @file sleepsort.c
* @brief sorts numbers
*
* @compile gcc sleepsort.c -fopenmp -o sleepsort
*
* @author Gerald Jay Sussman (Massachvsetts Institvte of Technology)
*
* Copyright (C) 2011 Gerald Jay Sussman and the Massachvsetts
* Institvte of Technology. All rights reserved.
*
* The new BSD License is applied to this software, see LICENSE.txt
*/
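The OMP code itself doesn't appear in this copy of the thread; below is a minimal sketch of what such an implementation might look like (mine, not the header author's), compiled as the header says with gcc sleepsort.c -fopenmp -o sleepsort:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int n = argc - 1;
    if (n < 1)
        return 1;

    /* one OpenMP thread per argument; each waits in proportion to its value */
    #pragma omp parallel for num_threads(n) schedule(static, 1)
    for (int i = 0; i < n; i++) {
        int v = atoi(argv[i + 1]);
        sleep(v);
        #pragma omp critical   /* keep output lines from interleaving */
        printf("%d\n", v);
    }
    return 0;
}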
Just run an insertion sort over the output if you're worried about race conditions. The sleep sort already leaves the numbers relatively close to their final positions, so the cleanup pass is cheap.
Terribly inaccurate with anything other than integers, but you can give arguments like 0.003 "sqrt(2)" 10e-3, for what it's worth.
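For reference, the cleanup pass suggested above is only a few lines (a sketch of my own, written on doubles so fractional inputs survive; insertion sort runs in close to linear time on nearly sorted input, which is what a racy sleep sort produces):

#include <stddef.h>

/* Fixes the occasional swapped pair left behind by a racy sleep sort. */
static void insertion_sort(double *xs, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        double key = xs[i];
        size_t j = i;
        while (j > 0 && xs[j - 1] > key) {   /* shift larger neighbours right */
            xs[j] = xs[j - 1];
            j--;
        }
        xs[j] = key;
    }
}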
Name: Anonymous 2011-03-31 20:49
Name: Anonymous 2011-03-31 20:58
way to ruin a good python thread
Name: Anonymous 2011-03-31 22:44
If you're sorting very large groups of numbers, then unless your machine can actually run that many threads at once, the accuracy will be compromised by shit like thread quantums.