I don’t know if everyone gets the reference: RollerCoaster Tycoon is in fact written mostly in assembly to use the hardware more efficiently.
It also makes it really portable which is a big part of why all the ports to modern systems are so close to the original. Obligatory OpenRCT2 shoutout.
OpenRCT2 ditched assembly tho. They wrote it entirely in C++.
Sorry, two separate thoughts. Wasn’t saying open RCT used assembly just wanting to shout out the project.
Writing it in assembly would make it pretty much the opposite of portable (not accounting for emulation), since you are directly giving instructions to specific hardware and a specific OS.
Not necessarily. Unless you’re working on something like an OS, you’re not usually directly accessing/working on the hardware. As long as you can connect the asm up to your OS/driver abstraction layer, and the OS-to-hardware APIs work, the game should be functional. Not to mention RCT targets the x86 architecture, which was one of the most popular at the time.
That’s no less true than games written in C, or otherwise with few dependencies. Doom is way more portable than RCT precisely because it’s written in C instead of assembly.
you’re not usually directly accessing/working on the hardware
I mean, you are. Sure, there’s a layer of abstraction when doing tasks that require the intervention of the kernel, but you are still dealing with CPU registers and stuff like that. Merely by writing in assembly you are making your software less portable, because you are writing for a specific ISA that only a certain family of processors can read, and talking with the kernel through an API or ABI that is specific to that kernel (standards like POSIX mitigate the latter part somewhat, but some systems, like Windows, aren’t POSIX compliant).
Started playing openrct2 multiplayer with a friend yesterday. Some of the best fun I’ve had.
My friend and I created MONORAIL LAND
We created the world of monorail 1. Everything exists to bring more people to monorail 1. What is monorail 1? It is a 4 car monorail that takes the shortest possible path back to the start of the station. We have several other attractions at the park such as: The Pit; Memento Mori; Install CSS, but none of them are the main attraction.
Damn this post. This is really going to f up my weekend plans.
This game ran so smooth.
If you’ve written 500k lines of code you were surely pretty confident about your decision.
You sweet summer child…
I’m a developer, I don’t just continue doing things for years if it doesn’t make sense.
(If I’m the one making the decisions)
I have seen Devs do things for many years that make no sense
programmers just not a uniform bunch. not all of them blockchain grifters. fancy that.
Like the classic: inherit a broken code base and not be allowed by the owner to rewrite it from scratch. So you have to spend more time making each part work without the others working. And before you are finished, the customer says they have something else for you to do.
That’s when you start introducing modules that have the least impact on the legacy code base. Messaging is a good place to start, but building new code next to the existing code and slowly refactoring whenever you’ve got time to spare is at least a bearable way to go about it.
Shhhh you just described iterative development. Careful not to be pro agile, or the developers with no social skills will start attacking you for being a scrum master in disguise!
Fuck agile, or scrum, or whatever it is called. I just look at the issues and pick whatever I feel like doing. Kanban for life.
Programmers love to rewrite things, but it’s often not a good idea, let alone good for a business. Old code can be ugly because it is covered with horrible lessons and compromises. A rewrite can be the right thing, but it’s not to be taken lightly. It needs to be budgeted for, signed off on, and carefully planned. The old system needs to be stable enough to continue until the new system can replace it.
Okay, I’ll tell you: in this situation, the code never really worked outside of the demo stage. It was written in bash + Ansible + Terraform + Puppet, designed to use SSH from a Docker container and run stages of the code on different servers. Some of it supposedly worked on his computer, but it failed to run when he was not the one clicking the buttons, and having read through each part, I can promise you that it never worked.
I didn’t write “broken code base” because I didn’t like the code; I meant that it didn’t work.
The whole point of Docker is to solve the “works on my computer” problem by shipping the developer’s hacked-up OS along with the app. (Rather than fixing it and dealing with dependencies like a grown-up.)
Bit special for it to still be broken. If it flat out doesn’t work, at all, then it may well be “sunk cost fallacy” to keep working on it. There is no universal answer, but there is a developer tendency to rewrite.
I’ll concede that his point in using Docker was to avoid the “it works on my computer” problem. It was literally one of his talking points in his handover meeting. But that is not the problem Docker is trying to solve, and not its strength.
Docker and similar container software makes many things very convenient, and has uses far outside its originally intended usage.
And in this situation, you want stable package versions and a simpler, uniform setup. You don’t get stable package versions, because Docker doesn’t provide reproducible builds (and he didn’t do the work to get around that), and it is not a simpler setup when you want to use the host’s ssh agent with ssh inside Docker, which requires different steps for different distros and for Mac, and I don’t know if Windows would have worked at all. Sharing your ssh agent into the Docker container is not stable either; even if you set it up, it isn’t sure to work after the next reboot, and it can be very difficult on some Linux distros due to permissions, etc.
Then I ended up putting it on a VM that is already used for utilities. If I were to do it today, I would probably use Nix to run these programs, which are very sensitive to version changes, in a stable, reproducible environment that can run on any Linux distro, including in Docker.
But the program had many more issues, like editing YAML files by catting them and piping them into tac, piping into sed, and then into tac again… And before you say you could do that with just one sed command: sure, but the sane solution is to use yq. Let’s just say that was the tip of the iceberg.
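For anyone who hasn’t met that pattern: reversing with tac lets a first-match sed edit hit the *last* match of the original file. A sketch of the kind of pipeline described (GNU sed and tac assumed; the file contents here are made up, and the sane yq equivalent would be a single expression like `yq '.image = "new"'`):

```shell
# Made-up YAML with a repeated key, just to demonstrate the pipeline.
printf 'image: old\nname: demo\nimage: older\n' > demo.yaml

# The pattern described: reverse the file, edit the FIRST match with sed
# (which is the LAST match of the original), then reverse it back.
tac demo.yaml | sed '0,/^image:/s/^image:.*/image: new/' | tac
# prints:
# image: old
# name: demo
# image: new
```

The `0,/regexp/` address range is a GNU sed extension, which is part of why pipelines like this are so fragile across systems.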
Oh, and I just have to note: it claimed working features, but there was no way for that code to be executed, and when I actually tried to hook up this code, I couldn’t believe it had ever fully worked.
I don’t know exactly how much code reuse Sawyer had going back then, but if you’ve ever played Transport Tycoon (or more likely the open source version around today, OpenTTD) then you know that the interface and graphics are extremely similar. So it’s not like he started from scratch each game with nothing but a hot spinning disc and a magnetized needle.
But yeah, the main reason to put up with all the modern framework bloat is the ever-ephemeral promise of being able to write your thing once and have it ported to run anywhere with minimal to no further effort.
Don’t want to be that guy, but you can actually use library’s in Assembly and probably want to, as otherwise you have no good way of interacting with the OS.
In fact Chris Sawyer did use C for the purposes of linking the OS libraries necessary for windowing, rendering, sound etc.
You can actually pluralize library and probably want to.
I wanna see someone make a GPU accelerated game in assembly.
Just throw the Vulkan and DX12 C APIs in the garbage and do it all yourself lol.
To be fair, assembly lines of code are fairly short.
/ducks
Back in the day we wrote everything in asm
Writing in ASM is not too bad provided that there’s no operating system getting in the way. If you’re on some old 8-bit microcomputer where you’re free to read directly from the input buffers and write directly to the screen framebuffer, or if you’re doing embedded where it’s all memory-mapped IO anyway, then great. Very easy, makes a lot of sense. For games, that era basically ended with DOS, and VGA-compatible cards that you could just write bits to and have them appear on screen.
Now, you have to display things on the screen by telling the graphics driver to do it, and so a lot of your assembly is just going to be arranging all of your data according to your platform’s C calling convention and then making syscalls, plus other tedious-but-essential requirements like making sure the stack is aligned whenever you make a jump. You might as well write macros to do that since you’ll be doing it a lot, and if you’ve written macros to do it then you might as well be using C instead, since most of C’s keywords and syntax map very closely to the ASM that would be generated by macros.
A shame - you do learn a lot by having to tell the computer exactly what you want it to do - but I couldn’t recommend it for any non-trivial task any more. Maybe a wee bit of assembly here-and-there when you’ve some very specific data alignment or timing-sensitive requirement.
I like ASM because it can be delightfully simple, but it’s just not very productive especially in light of today’s tooling. In practice, I use it only when nothing else will do, such as for operating system task schedulers or hardware control. It’s nice to have the opportunity every once in a while to work on an embedded system with no OS but not something I get the chance to do very often.
On one large ASM project I worked on (an RTOS), it’s exactly as you described. You end up developing your own version of everything a C compiler could have done for you for free.
Pssh, if you haven’t coded on punch cards, you aren’t a real coder
Step 1: Begin writing in Assembly
Step 2: Write C
Step 3: Use C to write C#
Step 4: Implement Unity
Step 5: Write your game
Step 6: ???
Step 7: Profit
I just want to note that “Implement Unity” includes the substep of “use C# to generate C++”.
Step 6 extort developers
Good thing I wrote my own game engine using D, and soon there will be 2 (known) games for it.
I’m on E already
D
I was really into D, but I gave up on it because it seemed kind of dead. It’s often not mentioned in long lists of languages (e.g. I think Stack Overflow’s survey did not mention it), and I think I remember once looking at a list of projects that used D and most of them were dead. I think I also remember once seeing a list of companies that used D, and when I looked up one of them, I found out it didn’t exist anymore 😐️
Give it another go, maybe join the community Discord server.
There are some hobby projects going very well, and D3 seems to be inevitable (safe by default, PhobosV3, etc.).
Eww Unity
Step 0: Invent the universe
What are we doing here? Baking a pie?
I mean, I’m pretty sure it would be a good learning experience so I would really not regret it.
I tried decades ago. I grew up learning BASIC and then C; how hard could it be? For a 12-year-old with no formal teacher and only books to go off of, it turns out: very. I’ve learned a lot of coding languages on my own since, but I still can’t make heads or tails of assembly.
Try 6502 assembly. https://skilldrick.github.io/easy6502/
My favorite assembly language by far.
this page is great. starting right at “draw some pixels” in such a simple way just instantly makes it feel a bit more approachable!
If you can’t get enough 6502 you can build your own http://wilsonminesco.com/6502primer/
I built a 6502 SBC on a breadboard years ago
Assembly requires knowledge of the CPU architecture, its pipeline, and memory addressing. Those concepts are generally abstracted away in modern languages.
You don’t need to know the details of the CPU architecture and pipeline, just the instruction set.
Memory addressing is barely abstracted in C, and indexing in some form of list is common in most programming languages, so I don’t think that’s too hard to learn.
You might need to learn the details of the OS. That would get more complicated.
I said modern programming languages. I do not consider C a modern language. The point still stands about abstraction in modern languages. You don’t need to understand memory allocation to code in modern languages, but the understanding will greatly benefit you.
I still contend that knowledge of the CPU pipeline is important, or else you’ll wind up with code that constantly stalls the pipeline. I guess you could say you can code in assembly without knowledge of the CPU architecture, but you won’t be making any code that runs better than the output of other languages.
Sounds very similar to my own experience though there was a large amount of Pascal in between BASIC and C.
Yeah, I skipped Pascal, but it at least makes sense when you look at it. By the time my family finally jumped over to PC, C was more viable. Then in college, when I finally had the opportunity to formally learn, it was just C++ and HTML… We didn’t even get Java!
I had used like four different flavors of BASIC by the time I got an IBM-compatible PC, but I ended up getting on the Borland train and ended up with Turbo Pascal, Turbo C, and Turbo ASM (and Turbo C++, which I totally bounced off of). I was in the first class at my school that learned Java in college. It was the brand new version 1.0.6! It was so rough and new, but honestly I liked it. It’s wildly different now.
Reminder that TTD was open source even before OpenTTD :D
Chris Sawyer is a madman.
I believe you meant to write genius.
Chris Genius is a madman.
Who the hell even is Madam Chris Genius?
Shifts bit to the left
Um what am I doing
Shifts bit to the right
program crashes
They call me the Programmer and I speak to the metal,
Now check out this app, that really shows off my mettle!
where’s your furry cracktro then??
Your game will actually likely be more efficient if written in C. The gcc compiler has become ridiculously good at optimizing and probably knows more tricks than you do.
Yep but not if you write sloppy C code. Gotta keep those nuts and bolts tight!
If you’re writing sloppy C code your assembly code probably won’t work either
Except everyone writing C is writing sloppy C. It’s like driving a car, there’s always a non-zero chance of an accident.
Even worse, in C the compiler is just waiting for you to trip up so it can do something weird. Think the risk of UB is overblown? I found this article from Raymond Chen enlightening: https://devblogs.microsoft.com/oldnewthing/20140627-00/?p=633
I recently came across a Rust book on how pointers aren’t just ints, because of UB.
```
fn main() {
    a = &1
    b = &2
    a++
    if a == b {
        *a = 3
        print(b)
    }
}
```
This may either: not print anything, print 3 or print 2.
Depending on the compiler, since b isn’t changed at all, it might optimize the `print(b)` into `print(2)`. Even though everyone can agree that it should either not print anything or print 3, but never 2.

A compiler making assumptions like that about undefined behaviour sounds just like a bug. Maybe the bug is in the spec rather than the compiler, but I can’t think of any time it would be better to optimize that code out entirely because UB is detected, rather than just throwing an error or warning and otherwise ignoring the edge cases where the behaviour might break. It sounds like the worst possible option, exactly for the reasons listed in that blog.
The thing about UB is that many optimizations are possible precisely because the spec specified it as UB. And the spec did so in order to make these optimizations possible.
Codebases are not 6 lines long; they are hundreds of thousands of lines. Without optimizations like those, many CPU cycles would be lost to unnecessary code being executed.
If you write C/C++, it is because you either hate yourself or the application’s performance is important, and these optimizations are needed.
The reason Rust is so impressive nowadays is that you can write high-performing code without risking accidentally invoking UB. And if you are going to write code that might result in UB, you have to explicitly state so with `unsafe`. But for C/C++, there’s no saving you. If you want your compiler to optimize code in those languages, you are going to have loaded guns pointing at your feet all the time.
Write it in Rust, and it’ll never even leak memory.
Especially these days. Current-gen x86 architecture has all kinds of insane optimizations and special instruction sets that the Pentium I never had (e.g. SSE). You really do need a higher-level compiler at your back to make the most of it these days. And even then, there are cases where you have to resort to inline ASM or processor-specific intrinsics to optimize to the level that Roller Coaster Tycoon is/was. (original system specs)
I might be wrong, but doesn’t SSE require you to explicitly use it in C/C++? Laying out your data as arrays and specifically calling the SIMD operations on them?
There’s absolutely nothing you can do in C that you can’t also do in assembly. Because assembly is just the bunch of bits that the compiler generates.
That said, you’d have to be insane to write a game featuring SIMD instructions these days in assembly.
Technically assembly is a human-readable, paper-thin abstraction of the machine code. It really only implements one additional feature over raw machine code and that’s labels, which prevents you from having to rewrite jump and goto instructions EVERY TIME you refactor upstream code to have a different number of instructions.
So not strictly the bunch of bits. But very close to it.
I think they meant the other way around, that if you wanted to use it in C/C++, you’d have to either use assembly or some specific SSE construct otherwise the compiler wouldn’t bother.
That probably was the case at one point, but I’d be surprised if it’s still the case. Though maybe that’s part of the reason why the Intel compiler can generate faster code. But I suspect it’s more of a case of better optimization by people who have a better understanding of how it works under the hood, and maybe better utilization of newer instruction set extensions.
SSE has been around for a long time and is present in most (all?) x86 chips these days and I’d be very surprised if gcc and other popular compilers don’t use it effectively today. Some of the other extensions might be different though.
Oh I see your point. Yeah, I think they meant that. And yes, there was a time you’d have to do trickery in C to force the use of SSE or whatever extensions you wanted to use.
If you want to use instructions from an extension (for example SIMD), you either provide two versions of the function, or your program just won’t run on some CPUs. It would be weird for someone who doesn’t know about that to compile it for x86 and then have it not run on another x86 machine. I don’t think compilers use those instructions if you don’t tell them to.
Anyway, the SIMD the compilers will do is nowhere near the amount that’s possible. If you manually use SIMD intrinsics/inline SIMD assembly, chances are that it will be faster than what the compiler would do. Especially because you are reducing the % of CPUs your program can run on.
I want to get off Mr. Bones’ Wild Ride
The ride never ends!
I was looking for this comment. Brings back so many good memories of the early internet.