Show Posts



Messages - Camlorn

1
Drivers / Re: Status update of fluffos 3.0
« on: October 19, 2014, 09:05:09 pm »
Well, technically, a 32-bit process is limited to at most 4 GB...so...is the driver 64-bit safe?

2
Drivers / Re: smart pointers as a change to the driver for refcounting
« on: October 15, 2014, 11:13:14 pm »
That's also true.  I switched to shared_ptr for my own projects, but retrofitting one of them (a much, much smaller codebase) took around two weeks before everything was right.  At that point, that was something like 20% of the total development time that had gone into it.

3
Drivers / Re: smart pointers as a change to the driver for refcounting
« on: October 15, 2014, 09:31:38 am »
I recall hearing once that Discworld takes a couple gigabytes.  This may be false, and seems excessive.  Discworld has a very, very large number of players--back when I played it, 80-100, probably more.
3 Kingdoms is another LPMud.  I am unsure which driver it runs.  Back when I played it, 3 Kingdoms had upwards of 150 players.  3 Kingdoms rebooted once a week, or thereabouts.  This rebooting was not for system problems; it was for game balance--3 Kingdoms does not save eq, and part of gameplay is reacquiring it after a reboot.  3 Kingdoms never, ever lags.
Batmud is probably the largest mud I know of.  I don't recall it rebooting at all, but this may have happened.  If it did, it didn't impact the players much.  Batmud, in the not-too-distant past, had upwards of 300 players at peak times.  Batmud also never lagged.
It's not worth it, basically.  No player will notice.  No mud admin will notice.  No one but the very few people who hack the driver cares, and most muds don't hack the driver.  It's a bit like saying you're going to make your compiler 2% faster so that end users have better programs--the end user isn't going to notice.  In the case of FluffOS, it's mostly stable, so don't touch it.
As for DGD: a basic language of your own takes a couple weeks.  DGD does a bunch of hard stuff that isn't part of the "language."  The big selling points of DGD are that it's got atomics and it's got the ability to let your mud pretend it never shuts down.  DGD is really a language that tries to be compatible with another nonstandard language, a system that tries to run it in an environment so constrained it makes today's hardware look like heaven, and the ability to conceptually use more RAM than you actually have available.
As for my own language?  With modern constraints, it would take two weeks to a month to have something basic but usable going.  Grab any of the frameworks--LLVM plus something like ANTLR comes to mind--and you're 75% done with your compiler.  Interpreting is easy enough, especially--as you said--because those micro-optimizations have little to no impact.  If your goal is to experiment with this kind of thing, then I suggest starting your own project.  You're talking about retrofitting a system that's worked fine since the early '90s or before--I'm not sure where FluffOS's roots began.  It feels like your goal is to learn something, but maybe I'm wrong.

4
Drivers / Re: smart pointers as a change to the driver for refcounting
« on: October 14, 2014, 09:34:46 am »
I'm going to have to agree with Quixadhal.
If this were a new project written in a modern programming style, I could see it.  There are a lot of software design practices we follow now that weren't common 20 years ago.  Most relevant to this case: the idea that globals are bad.  Modularity has also become a big thing, which--again--wasn't considered so important when LPC was originally invented.
The Boehm collector is not perfect.  I see people go "Oh, yay, Boehm!  My problems are over!"  But that's not true.  It's accurate, but not 100% accurate, especially if you're holding on to global pointers everywhere, even when you're not using them.  The Boehm collector is an interesting way to find leaks and might serve well as an underlying memory manager, but it doesn't free you from this concern.
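To make that concrete, here's a rough sketch (it uses the real bdwgc API from <gc.h>, but the program itself is just an illustration I'm making up, not driver code) of how a stale global keeps an allocation alive, and how lazy finalization means you don't get to clean up at a moment of your choosing:

// Build roughly as: g++ boehm_demo.cpp -lgc
#include <gc.h>
#include <cstdio>

static void* stale_global = nullptr;   // scanned as a root on every collection

static void report_freed(void* obj, void* /*client_data*/) {
    std::printf("block %p finally reclaimed\n", obj);
}

int main() {
    GC_INIT();

    void* block = GC_MALLOC(1 << 20);
    GC_register_finalizer(block, report_freed, nullptr, nullptr, nullptr);

    stale_global = block;              // a global we never read again
    block = nullptr;

    GC_gcollect();
    GC_invoke_finalizers();            // nothing prints: the global keeps it live

    stale_global = nullptr;            // only now is the block unreachable
    GC_gcollect();
    GC_invoke_finalizers();            // finalizer runs here, whenever the GC gets to it
    return 0;
}

Conservative stack scanning can keep the block alive even longer than this suggests, so treat it as a best case.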
Refcounting with cycle detection is a garbage collector.  Nothing says that garbage collectors can't give you guarantees about object destruction.  The Boehm collector doesn't--you can't reliably run cleanup at the moment Boehm decides something should die.  Refcounting should also reduce pause-the-world collections.  I'm not sure why more languages don't use it as a primary technique, given that it spreads the cost out a lot.
But what's being suggested here is to take a system that works and make it work.  It already works, so why bother?  Most muds can stay up for a week or more without a reboot, a fact which isn't true of a lot of user-facing software these days.  If you want to learn about GC, I'd suggest writing your own language--retrofitting it into LPC isn't really needed, nor is it simple.

5
Drivers / Re: smart pointers as a change to the driver for refcounting
« on: October 13, 2014, 06:16:38 pm »
My point isn't so much that there aren't better options--there are, especially if we're on C++ now.
But is it worth it?  There are very, very large muds running on FluffOS without a problem, so I'm not sure I see the reason or justification for such an effort.  The only mud I know of that would directly benefit is Lost Souls, but that's LDMud anyway.
As for GC: if you add the constraint that only one thread may construct or destruct an object at a time, you could easily do a gc_ptr that's used the same way as shared_ptr, except with no worries about cycles.  I sort of brainstormed this earlier, and it's maybe a Saturday project.  So perhaps you're right, at least if someone wanted to retrofit.  I might do it for my own projects, which would have no problem with it--you could easily delegate freeing to a background thread so that it doesn't lock the whole app, and I highly doubt that most apps spend their whole time creating new objects.
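Here's a rough sketch of just the delegate-freeing-to-a-background-thread part in plain C++ (Reaper and make_deferred are names I'm inventing for illustration; the cycle handling a real gc_ptr would need isn't shown):

#include <condition_variable>
#include <deque>
#include <functional>
#include <memory>
#include <mutex>
#include <thread>
#include <utility>

class Reaper {
public:
    Reaper() : worker_([this] { run(); }) {}
    ~Reaper() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }
    // Queue a destruction job; called from the smart pointer's deleter.
    void enqueue(std::function<void()> destroy) {
        { std::lock_guard<std::mutex> lk(m_); q_.push_back(std::move(destroy)); }
        cv_.notify_one();
    }
private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        while (!done_ || !q_.empty()) {
            cv_.wait(lk, [this] { return done_ || !q_.empty(); });
            while (!q_.empty()) {
                auto job = std::move(q_.front());
                q_.pop_front();
                lk.unlock();
                job();              // the actual delete happens off the main thread
                lk.lock();
            }
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> q_;
    bool done_ = false;
    std::thread worker_;            // declared last so the members above exist first
};

// Wrap a T in a shared_ptr whose last release defers the delete to the reaper.
// The Reaper has to outlive every pointer created through it.
template <typename T, typename... Args>
std::shared_ptr<T> make_deferred(Reaper& reaper, Args&&... args) {
    T* raw = new T(std::forward<Args>(args)...);
    return std::shared_ptr<T>(raw, [&reaper](T* p) {
        reaper.enqueue([p] { delete p; });
    });
}

The last release can happen on any thread; the delete itself always runs on the reaper's thread, so the main loop never stalls on a big teardown.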

6
Drivers / Re: smart pointers as a change to the driver for refcounting
« on: October 13, 2014, 12:33:38 pm »
In all honesty, given modern computing power, I'm hard-pressed to think of a case where the tiny performance downside of refcounting matters.  I'm sure it can, but I don't see how stopping the world is any better--not unless it solves a specific problem.
As for "easy to implement": well, if you think it's easy, then you're a better computer scientist than I am.  Especially on top of an implementation that already exists.
A cycle detector is similar to the sweep step of a GC: it simply finds cycles.  If a holds a reference to b and b holds a reference to a, neither can die.  Python, at least, marks these specially and automatically frees them if they don't have a destructor.  Now that I think about it, I'm not sure which is simpler: a cycle detector or a full GC.
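For anyone who hasn't hit this, here's the cycle problem in miniature, in C++ terms (illustrative only, nothing to do with the driver):

#include <cstdio>
#include <memory>

struct Node {
    std::shared_ptr<Node> other;                 // strong link in both directions
    ~Node() { std::puts("Node destroyed"); }     // never prints below
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->other = b;   // a -> b
    b->other = a;   // b -> a: both refcounts are now 2

    // a and b go out of scope here; each count drops to 1, never to 0, so
    // neither destructor runs and both Nodes leak.  A cycle detector (or
    // making one of the links a std::weak_ptr) is what reclaims this.
    return 0;
}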

7
Drivers / Re: smart pointers as a change to the driver for refcounting
« on: October 12, 2014, 04:01:41 pm »
The only way to do this at all is C++.  My understanding is that the driver is C, so you don't get destructors, and the smart pointer trick only works with destructors.  It's possible that someone converted it--I haven't really been following the new FluffOS development at all.  There are some tricks with C's preprocessing facilities and helper libraries, but they only work on function-local allocations.
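To spell out what I mean by the trick needing destructors: the decrement lives in the destructor, so every scope exit and exception path releases the reference for free.  A minimal intrusive sketch (made-up names, not the driver's types):

#include <cstddef>
#include <utility>

struct RefCounted {
    std::size_t refs = 0;
};

template <typename T>          // T must derive from RefCounted
class ref_ptr {
public:
    explicit ref_ptr(T* p = nullptr) : p_(p) { if (p_) ++p_->refs; }
    ref_ptr(const ref_ptr& o) : p_(o.p_) { if (p_) ++p_->refs; }
    ref_ptr& operator=(ref_ptr o) { std::swap(p_, o.p_); return *this; }
    ~ref_ptr() { if (p_ && --p_->refs == 0) delete p_; }   // the whole point
    T* operator->() const { return p_; }
    T& operator*() const { return *p_; }
private:
    T* p_;
};

In C you'd have to write the increment and decrement by hand at every assignment and every return path, which is exactly the bookkeeping the driver does now.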
But garbage collectors are much harder.  Would it be worth it?  Maybe.  But refcounting with a cycle detector does almost the same thing and is much, much less work from a coding perspective.  It's not like the tiny performance hit is going to matter, either.  GCs can also enable heap defragmentation, but again that's not an issue here--well, with the possible exception of Lost Souls.
But if your reference counting is "good enough" and you put some style guidelines in place, it's not so bad.  I mean, mud coders aren't the ones doing it; only driver developers are.
And finally, the biggest nail in GC implementations, in my experience: lack of guarantees.  It's very nice to have a guaranteed destructor call in some cases.  While not so much an issue for LPC, not having this is a pain when using an FFI from a GC'd language.

8
Drivers / Re: Optimizing LPC for JIT use
« on: January 25, 2014, 10:05:34 am »
I don't personally want to reimplement the driver.  If I did, I'd use Python, but that's more of a personal choice than anything.  It's got a good debugger, a bunch of really good networking frameworks, etc.  I really kind of like Twisted these days--but at that point, it's not a driver, it's a mud.
I don't think that mixing languages in your driver is a good idea, because then someone needs to be familiar with all of them to maintain it.  In your mud?  Sure.  Just not in the driver.  I also have a lot against Visual Studio, for managing to be in the top five of tools that are important for programming yet aren't accessible.  Can you do a JIT in C#, anyway?  Assuming it's possible, JIT to MSIL could be an interesting project--and then let .NET take over.
My point about PyPy is that you'd end up with, basically, a driver in LPC.  You implement your interpreter in RPython, do something magical to get a JIT out of it, and then run a translation.  After that, you've (theoretically) got an LPC that's as fast as anything else PyPy runs.  Then build your built-ins in LPC and implement only some lower-level primitives in the driver.  Or at least, that's what they claim, and it's how they got Python working.  If I were to take up such a project and a JIT was a requirement, I'd probably go there--they've got a few good tutorials, and they make it look simple.  Of course, it can always turn out not to be as simple as my first impression.
But if I suddenly developed an interest in driver re-implementation, I'd just go grab, say, LuaJIT and use it instead of LPC.  Or just write my mudlib in something modern like the aforementioned Python, or Scala.  The latter can now be its own scripting language, and the former always has been.  Note: I am not a believer in sandboxing my mud's coders, and I don't think builders should have coder power--just to head off the objection I know is coming from that last bit.

9
Drivers / Re: Optimizing LPC for JIT use
« on: January 24, 2014, 02:07:12 pm »
This is interesting, and I can't say I know much about it.  One possible approach, though perhaps not the best, would be to reimplement a driver in RPython and translate it through PyPy.  The PyPy people aren't aiming for just a Python interpreter; they're aiming for a JIT-language-creation toolkit.  Downsides: reimplementing the world of built-ins (but it's not actually that big, unless I'm missing something), and needing to reimplement the core language.  Unfortunately, documentation is lacking (but this is somewhat true of LLVM as well).  The tutorials I've seen make it look simple, at least for simple languages, and any new optimizations that PyPy's JIT gets are going to be given to us for free, probably without changes.
I'm not sure this is worth it from the what-we-gain perspective.  LPC runs fast enough for everyone, and isn't going to get a significant number of new recruits just because it now has a JIT.  If you're writing a mud that really, really, really cares about performance, you probably decided to start out in C.  I suppose a few muds would benefit: those which have been around for a very long time and found performance issues creeping up on them.  Lost Souls is my example, but there are probably others.  It's worth it from the cool-project perspective, of course.  Is the point of this to actually gain something important/useful, or is it just one of those fun, cool, really interesting projects?  I'm curious whether my assessment that muds won't really notice or need it is actually accurate--I really feel like it is.  And I'm not saying don't do it; I'm just not sure what the motivation is.

10
Drivers / Re: using sqlite3 in fluffOS 2.26.1
« on: April 07, 2013, 12:44:26 am »
I believe it mirrors the C API closely, and I believe Discworld has at least a bit of code you can look at, somewhere.
    Are you sure it's worth having?  For muds, at least the codebases that are publicly available, it seems to cause more trouble than it's worth (case in point: the Discworld mudlib installation).  Use with care.
    I don't have much information on this, I'm afraid.

11
Drivers / Re: Thoughts on licensing issue
« on: April 05, 2013, 07:02:12 pm »
Is Hydra in a working/available/etc state?  Last I heard it was sort-of beta, or something.

12
Drivers / Re: Thoughts on licensing issue
« on: April 04, 2013, 12:05:51 pm »
In no particular order (save as I thought of them):

If we really want to go here, it shouldn't be up to the driver.  Parallelizing code automatically is hard--probably doctorate-level work; you could build a college degree around it.  If this happens, it's headed over toward DGD in terms of complexity--that is, there will be all sorts of "this function must" cases that creep in over time.  I like DGD, but it is definitely more complex.
    If parallel execution is something that you *really* want--and I really don't see the point; just faking it in the driver is good enough for most muds--let the mudlib do it, not you.  Tell the mudlib authors that if they want it, they need to handle heartbeat themselves, and maybe add in some more hooks to control it.
    I would love to see an optimizer, and automatically parallelizing code would be cool and 21st-century and all that, but I don't expect these things in the near future.  They are hard to do.
    Perhaps the best approach is a new thread block, something like:
thread {
threaded code goes here
}
    Or, for explicit thread control,
handle = thread {
...
}
    In all honesty, I think that taking advantage of this would require centralizing the mudlib anyway, and moving to a Diku-style design philosophy: everything is handled by centralized daemons.  No mudlibs, to my knowledge, do this.
    HTTP connections don't stay open.  Having thread support might let you do interesting web stuff, but it won't affect HTTP that much.  I'm having trouble parsing your statement about HTTP to figure out what exactly you mean.
    I would not personally integrate FTP into the mud.  It is an appealing thought--being able to reuse the mud's own security systems is a point in its favor--but it sounds like a whole lot of trouble to get right in terms of security.
    The gains of threading may be lost to the code that determines whether we can thread.  It's not a one-time compilation cost; it's an ongoing process, and objects can be replaced at any time.  In the best case, the code must examine the object to be parallelized and all direct references to it.  I am not up to date on how that works, but it seems to me--with the naive approach--that you would end up examining at least one level of references (that is, the references to the object directly).  This would then snowball until we've either examined the whole world or determined that we can parallelize a function call.
    If this is such a big deal, don't use const; use pure.  A pure function will always return the same output for the same inputs and will never have any side effect.  Pure functions can be parallelized and inlined elsewhere without a problem, and determining purity is simpler.  Allow the mudlib author to mark a function as pure, and (this is important) have the driver enforce it.  A pure function is a function that contains only mathematical operations, excluding assignment to its parameters, or that contains both math and function calls only to other pure functions.  The definition then becomes recursive, and all pure functions should be verifiably pure.  In addition, this means that the return value can also be cached (see the sketch after this list).  The degree of "purity" needed, if I may be allowed the liberty of inventing some terminology, is only such that the function is read-only; read-only functions that do nothing save return a value after reading a bunch of stuff may be parallelized with other read-only functions with no extra mechanisms, and parallelized with state-changing functions via locks.
    Finally, you're going to be sending deterministic coding out the window.  It does matter who gets do_attack called on them first.  In close battles, whether I attack first or you attack first matters, as we might both be one attack away from death.  In a worse case, if I cast a spell that, say, applies a barrier to me that destroys weapons, and the multithreading decides to do the next round of my enemy's attacks first...this will cause all sorts of trouble, and it has been the main objection I've seen to multithreading when I've brought it up.  Game mechanics then gain a kind of nondeterminism that isn't really randomness and can't be easily documented in a consistent fashion.
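Here's the caching sketch I mentioned above, in C++ rather than LPC (memoize_pure and the damage formula are made up for illustration; the idea is that the driver could do this internally for verified-pure functions):

#include <functional>
#include <map>
#include <memory>
#include <tuple>

// Wrap a pure function so repeated calls with the same arguments hit a cache.
template <typename R, typename... Args>
std::function<R(Args...)> memoize_pure(R (*fn)(Args...)) {
    auto cache = std::make_shared<std::map<std::tuple<Args...>, R>>();
    return [fn, cache](Args... args) -> R {
        auto key = std::make_tuple(args...);
        auto it = cache->find(key);
        if (it != cache->end()) return it->second;   // cached: no recompute
        R result = fn(args...);                      // pure: safe to cache
        (*cache)[key] = result;
        return result;
    };
}

// A "pure" function in the sense above: math on its inputs, no side effects.
int base_damage(int strength, int weapon_class) {
    return strength * 2 + weapon_class * 5;
}

int main() {
    auto cached = memoize_pure(base_damage);
    int first  = cached(18, 3);   // computed
    int second = cached(18, 3);   // served from the cache
    return first == second ? 0 : 1;
}

The same property is what makes parallel execution safe: since a pure call can't touch shared state, the scheduler can run it on any thread, or not at all if the result is already cached.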

13
Drivers / Re: Thoughts on licensing issue
« on: April 03, 2013, 04:24:13 pm »
Are the variable names there?  I thought they all just got slots internally, which leads us into investigating other things to determine which slot is which variable.  Being able to debug it is important, in my opinion, but if I'm going to do that--why bother?  I can do it in C/C++/any other language with a good debugger and get a lot more out of it.  Also, compiling the driver with debugging enabled is supposed to slow things down generally, so that might be significant (I'm not sure that allowing for the debugging of LPC would be much better, but LPC isn't optimizing anyway, to my knowledge, so we don't lose much).  I'm not talking about me, I'm talking about everyone, and doing that requires knowing driver internals to some extent anyway, and slowing down the driver in that way may matter for some muds (if we're on a testport, maybe not).

14
Drivers / Re: Thoughts on licensing issue
« on: April 03, 2013, 02:49:05 pm »
I forgot that, my bad.  The point still remains: that raises a whole bunch of questions.  Now it's functions being stopped, but that's the same problem.  Debugging the mud is changing the state of the mud in a very nondeterministic way: I imagine that this approach, for starters, could very easily screw up timing.  Consider:
I debug the room I'm standing in.
I type look--my player object calls the room's description function.
My player object now freezes, because it's called into the room.
The mud calls my player object to tell it that there's new input, probably from wherever we handle networking.
My player object is frozen, because it called into the room we're debugging.
The network handler consequently freezes because my player object shares the called-debugging-code flag thing.
    So, here's the problem.  There's no reason it shouldn't freeze the dragon of eternal killing-the-players, but it just froze all input.  Said dragon, for example, has a 3-turn warm-up time for his breath of slaying, during which we're supposed to flee, or hold up the shield of reflection, or who knows what.  But the players can't, because our mud isn't processing networking.  I suppose verbs could handle this more appropriately, but it snowballs.  What if I'm standing in the adjacent room?  What if a player innocently wanders in?  The point of debugging isn't to complicate the code, so we shouldn't end up having to check for debugging before doing anything, or check the return values of functions that are designed to quite literally never fail--a simple query_description function that just returns a string--or handle getting back input like null descriptions.  The approach being discussed here seems like it would cause a *lot* of problems, and no one would use it just because it's difficult to understand.  LPMuds don't usually have testports, but the whole point of a testport is that you can, if needed, stop the entire world.  Miss the error checking in one place, and it could very easily corrupt game state.  I would just leave it at stopping the entire world and recommend a testport somewhere in the docs.

15
Drivers / Re: Thoughts on licensing issue
« on: April 03, 2013, 11:18:19 am »
Just have the driver handle the debugging entirely, and stop the world when the mudlib says it's time to debug.  The entire thing is, IIRC, a stack machine, so it's not like it's a huge deal to go line by line if the appropriate information is available.  Not small, but not huge either.  I'd argue that, in this case, multithreading is not worth the large added complexity--what happens if something on another thread wants to change a variable in the object being debugged?  Do I progressively stop the mud as things call into what I'm debugging?  The list goes on.  I am pro-multithreading, or at least I used to be, but...I'd not even consider it here.
