Recent Posts

Drivers / Re: An official statement of FluffOS 3.0
« Last post by FallenTree on March 31, 2020, 10:23:14 pm »
Glad to see this forum is back!

Here's what's happening with FluffOS v2017 and v2019.

The driver's main repository is at https://github.com/fluffos/fluffos/ . The best way to communicate is either through GitHub issues or on https://forum.fluffos.info , and the official website is at https://www.fluffos.info

v2019 contains everything since v2017 and also comes with native UTF-8 support and built-in websocket support via xterm.js; you can get your MUD's web terminal enabled in minutes!

Let me know if you run into any issues trying it out.
General / Re: LPMuds.net is back! Again!
« Last post by saquivor on March 27, 2020, 07:57:14 pm »
Many thanks
Drivers / Re: JIT compiler revisited for Fluffos/dgd
« Last post by Dworkin on March 26, 2020, 09:54:09 am »
In August 2019, I finally completed the JIT compiler for DGD. It is implemented as an extension module, so that it will work with both DGD and Hydra, and indeed makes use of LLVM. It does not yet work on Windows, but should work on any other platform with LLVM support; it was tested on i386, x86_64 and ppc.

The extension starts and then communicates with an external program, which decompiles LPC bytecode to LLVM IR, and then uses clang to create a native shared object which the extension can load into DGD's memory. LPC bytecode files, LLVM IR files and shared object files are all cached, so that this translation needs to happen only once.

Keeping the LPC=>LLVM decompiler separate from DGD avoids issues with possible memory leaks and crashes, though of course an error in the bytecode translation could still be fatal. While translating an LPC bytecode image to a shared object takes much longer than an inline JIT compiler would, it runs completely independently from DGD, which is never delayed. Even loading the shared object into DGD's runtime memory is handled by a separate thread.
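The translate-once caching idea is easy to illustrate. Below is a minimal Python sketch (invented names, not DGD's actual code) in which an expensive translation step is keyed by a content hash of the bytecode, so repeated requests for the same program hit the cache:

```python
import hashlib

# Illustrative only: names and structure are invented, not DGD's real pipeline.
translate_calls = 0

def slow_translate(bytecode: bytes) -> str:
    """Stand-in for the external decompile-to-IR + clang step."""
    global translate_calls
    translate_calls += 1
    return "shared-object-for-" + hashlib.sha256(bytecode).hexdigest()[:8]

cache = {}  # content hash -> cached shared-object handle

def jit_compile(bytecode: bytes) -> str:
    key = hashlib.sha256(bytecode).hexdigest()
    if key not in cache:            # translate only on a cache miss
        cache[key] = slow_translate(bytecode)
    return cache[key]

program = b"\x01\x02\x03"
first = jit_compile(program)
second = jit_compile(program)       # cache hit: no second translation
```

In the real pipeline the cached artifacts are files on disk (bytecode, LLVM IR, shared objects) rather than an in-memory dict, but the invariant is the same: translation happens at most once per distinct bytecode image.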

DGD: https://github.com/dworkin/dgd
LPC extension modules: https://github.com/dworkin/lpc-ext
Heaven 7 Symposium / Compiling on modern Linux
« Last post by Kaylus on March 23, 2020, 02:49:36 am »
Hey everyone,

I was recently trying to get both Heaven 7 variants (v4 and Avatar) compiled. Getting them working with anything past 3.3 will take a massive amount of work, I think, but I can't even get them working on any version of 3.3. It appears to be related to the XERQ/ERQ utils. I've tried AWS Linux (essentially RedHat/CentOS), Ubuntu 18, and OSX, and every time I get similar outcomes:

* When logging in, the driver segfaults as soon as the login selection is made, but the crash is untraceable (unknown ??).
* Sometimes on v4 (depending on the driver build) it crashes in the lsword.c heartbeat, but commenting that out leads back to the segfault above.

Has anyone managed to get this running recently? I'd like to take a stab at cleaning it up but hoping I can shortcut past the segfault bit ;-)

Kaylus
General / LPMuds.net is back! Again!
« Last post by cratylus on March 14, 2020, 05:46:54 pm »
As some folks noticed, there was another pretty prolonged outage of the LPMuds.net forum. This was due to a very heavy travel schedule and my not having had time to fix a hosting problem.

Please let me know if I've goofed in restoring things. As always, please remember to be courteous and helpful. Please note that Adam is your day-to-day overlord... I just keep the lights on.
Drivers / Re: New fluffos repo containing the 2.28 version.
« Last post by silenus on September 21, 2019, 07:50:47 pm »
TBH I am not sure how far I will take the FluffOS code atm. A JIT would be nice, but as there are no new MUD developers and the community is somewhat dwindling, it might simply not be worth the effort for me. I added it to the issues just in case I decide to do it. There are some simple things I am trying to do with FluffOS, such as upgrading the system to use C++ fully and getting rid of the global variables, which are mostly just editing changes to the code. The VM in FluffOS has a lot more opcodes than the one in DGD, so doing anything with it involves a fair deal of tedious case-by-case coding with a lot of cases.
Drivers / Re: New fluffos repo containing the 2.28 version.
« Last post by Dworkin on September 21, 2019, 07:29:12 am »
Another direction you could take with FluffOS is to improve the bytecode VM.

I did this for DGD as a prerequisite for JIT compilation.  Getting rid of lvalues as an explicit type gave me a speedup of 10-30%.
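To illustrate what dropping explicit lvalues buys, here is a toy Python sketch (an invented mini-VM, not DGD's real instruction set) of the same `x = x + 1` statement executed two ways: once by pushing a boxed lvalue reference and storing through it, and once with a dedicated store opcode:

```python
# Toy illustration of removing explicit lvalues; opcode names are invented.

# VM A: an explicit "lvalue" value is pushed, then stored through.
def run_with_lvalues(locals_):
    stack = []
    for op, arg in [("PUSH_LVALUE", 0), ("PUSH_LOCAL", 0),
                    ("PUSH_CONST", 1), ("ADD", None), ("STORE_THROUGH", None)]:
        if op == "PUSH_LVALUE":
            stack.append(("lvalue", arg))      # boxed reference to locals_[arg]
        elif op == "PUSH_LOCAL":
            stack.append(locals_[arg])
        elif op == "PUSH_CONST":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "STORE_THROUGH":            # extra indirection on every store
            value = stack.pop()
            _kind, index = stack.pop()
            locals_[index] = value
    return locals_

# VM B: lvalues are gone; a dedicated opcode stores directly.
def run_without_lvalues(locals_):
    stack = []
    for op, arg in [("PUSH_LOCAL", 0), ("PUSH_CONST", 1),
                    ("ADD", None), ("STORE_LOCAL", 0)]:
        if op == "PUSH_LOCAL":
            stack.append(locals_[arg])
        elif op == "PUSH_CONST":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "STORE_LOCAL":              # no boxed reference to build
            locals_[arg] = stack.pop()
    return locals_
```

The second form dispatches fewer opcodes and never allocates a reference value, which is the kind of per-instruction overhead that a speedup like the one mentioned above presumably comes from in a real VM.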
Drivers / Re: New fluffos repo containing the 2.28 version.
« Last post by silenus on September 21, 2019, 06:39:52 am »
I think the added processing power gained by a JIT would probably open up new applications or make it possible to revisit the design decisions made in old-style MUDs. Obviously there is little interest in such things now. A* pathfinding is one thing that ran slowly on MUD servers about a decade ago.
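For reference, A* itself is compact to express; a minimal Python version over a 4-connected grid (illustrative, not tied to any particular driver) looks like:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; grid[y][x] == 1 means blocked."""
    def h(p):  # Manhattan distance: an admissible heuristic on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, None)]   # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:                  # already expanded with lower f
            continue
        came_from[node] = parent
        if node == goal:                       # walk parents back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g + 1
                if ng < g_cost.get((nx, ny), float("inf")):
                    g_cost[(nx, ny)] = ng
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny), node))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (0, 2))  # routes around the wall in row 1
```

The algorithm itself is simple; the historical cost on MUD servers presumably came from running loops like this in interpreted LPC over large maps, which is exactly where a JIT would help.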
Drivers / Re: New fluffos repo containing the 2.28 version.
« Last post by Dworkin on September 21, 2019, 05:23:33 am »
STM has the problem that it appears to scale worse than a well-designed implementation using locking, and you still have most of the same design headaches as you have with locking for multi-threaded code. Hydra doesn't require or implement STM, though it uses some of the same underlying ideas.

In any case, FluffOS & DGD are already overpowered for MUDs on present-day hardware, and DGD with JIT compilation is ridiculously overpowered. The speedup offered by JIT compilation will probably not play out in the realm of MUDs.
Drivers / Re: New fluffos repo containing the 2.28 version.
« Last post by silenus on September 21, 2019, 12:25:26 am »
I remember that a long time ago we talked about Hamlet's plans to introduce STM into FluffOS (which he later abandoned). I would assume the final implementation of Hydra is somewhat similar, using commits and rollbacks for threads of execution inside the driver. I think Hamlet gave up because it would take a fair deal of work to get all the global variables to play nicely with his ideas for the software transactional memory implementation. STM hasn't really caught on yet as a programming model (perhaps given limitations in scalability beyond 8/16 independent threads). I figure if someone could address some of these issues with new ideas, perhaps it would gain more traction.
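The commit/rollback model being discussed can be sketched in a few lines of Python (a toy illustration only, not Hydra's or Hamlet's actual design): each transaction buffers its writes and records the version of everything it reads, and commit succeeds only if none of those versions have changed in the meantime.

```python
# Toy STM sketch: versioned variables, buffered writes, validate-at-commit.
class TVar:
    def __init__(self, value):
        self.value, self.version = value, 0

class Conflict(Exception):
    pass

class Transaction:
    def __init__(self):
        self.reads = {}   # TVar -> version observed at first read
        self.writes = {}  # TVar -> buffered new value

    def read(self, tvar):
        if tvar in self.writes:            # read-your-own-writes
            return self.writes[tvar]
        self.reads.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value

    def commit(self):
        # Validate: everything we read must be unchanged since we read it.
        for tvar, seen in self.reads.items():
            if tvar.version != seen:
                raise Conflict             # roll back; caller retries
        for tvar, value in self.writes.items():
            tvar.value, tvar.version = value, tvar.version + 1

def atomically(fn):
    while True:                            # retry loop on conflict
        tx = Transaction()
        fn(tx)
        try:
            tx.commit()
            return
        except Conflict:
            continue

x = TVar(10)
t1 = Transaction()
_ = t1.read(x)                                       # t1 observes version 0

atomically(lambda tx: tx.write(x, tx.read(x) + 1))   # another tx commits first

try:
    t1.write(x, 0)
    t1.commit()                                      # x changed under t1
    conflicted = False
except Conflict:
    conflicted = True
```

Here the second transaction's commit invalidates the first one's read set, so the first must roll back and retry; the scalability concern mentioned above comes from exactly this retry traffic under contention.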