

Re: [Adam] Thursday, February 15, 2018

#106
DoctorGester wrote:
Thu Feb 22, 2018 4:56 pm
> *thing++;
> thing++;

That's not a typo and I can't imagine anyone making a typo of that kind if they are actually thinking about what they are doing. No human is perfect but don't strawman please.
Mate, if you have never made mistakes like this, then I must REALLY question if you have ever actually written anything.

Have you ever worked with loops? In any language, you are nearly guaranteed to have fucked up a loop on at least your first attempt at it.
I can't tell you the number of times I have killed Goatbot with an unintentional infinite loop. And Goatbot is the MOST STABLE of all of our IRC bots, because he is built in such a way that it takes a catastrophic failure to even bother him. And infinite loops are about his only weakness.
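For anyone following along, here is a minimal sketch (a hypothetical thing pointer over a small array, not anyone's actual code) of why those two quoted lines are so easy to mix up: *thing++ reads the current element and then advances the pointer, while thing++ only advances the pointer.

    #include <cstdio>

    int main() {
        int values[3] = {1, 2, 3};
        int* thing = values;

        int a = *thing++;   // reads values[0] (1), then points at values[1]
        thing++;            // skips values[1] without reading it; now points at values[2]
        int b = *thing;     // reads values[2] (3)

        std::printf("%d %d\n", a, b);   // prints "1 3"
        return 0;
    }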

Re: [Adam] Thursday, February 15, 2018

#107
> Typos are generally defined as an event where your intention is to press a certain key, but you either miss that key or press a different one by mistake. Everyone makes those. I correct typos in Josh's, Lindsey's, and Adam's posts after they make them... and I typically miss fixing some there too. Josh has typos in his latest devlog update. It's not a question of "how good are you at typing" - it's just a question of how often they occur. They can't be eliminated.

Programming languages are much more constrained and formalized than natural languages, and checking for typos in them is usually quite easy, because in a statically typed language you only have a limited set of things you can refer to in a given context. If you have a variable boI and a variable bol (yes, those are different identifiers) of the same type, you could make a typo and use one instead of the other, but if you stick to a proper naming convention the chance of a name-plus-type collision is so incredibly small that I don't believe typos of this kind ever had an impact under the described conditions. None of this applies to dynamic languages.
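To make that concrete, a tiny sketch (hypothetical names): the compiler only saves you from this kind of typo when the confused identifiers differ in type; if they collide in both name and type, the mistake compiles silently.

    #include <string>

    int main() {
        int         boI = 1;        // capital 'I'
        std::string bol = "one";    // lowercase 'l'
        int total = boI + 2;        // intended line compiles
        // int oops = bol + 2;      // typo caught: no operator+ for std::string and int

        int countA = 10;
        int countB = 20;
        int sum = countA + countA;  // meant countA + countB: same type, so it compiles fine
        (void)total; (void)sum; (void)countB;
        return 0;
    }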

> Mate, if you have never made mistakes like this, then I must REALLY question if you have ever actually written anything.

Yes I have; I work as a senior software developer on a project with a codebase of over 1.4 million lines and over 50 developers. Can I have my club card now?

Re: [Adam] Thursday, February 15, 2018

#110
DoctorGester wrote:
Thu Feb 22, 2018 5:05 pm
Yes I have; I work as a senior software developer on a project with a codebase of over 1.4 million lines and over 50 developers. Can I have my club card now?
No, you cannot have your club card. You can hand in whatever credentials you're waving around, though.

I must admit, I feel very sorry for the people who have to deal with your claims of "in theory the code behaves like this, so in reality it will behave exactly like this too!"

Because there is no job where the theory matches the actual reality of the job.

Sure, there are ideas and goals. But reality will ALWAYS throw a wrench or fifteen cows into the machine.

Re: [Adam] Thursday, February 15, 2018

#111
DoctorGester wrote:
Thu Feb 22, 2018 5:08 pm
English is not my native language and yeah, you are on point, I'm not that good at it. Thanks, Mr. Stannis the Mannis.

Image

Re: [Adam] Thursday, February 15, 2018

#112
outlander wrote:
Thu Feb 22, 2018 3:11 pm
it's not the tools that make your performance good, but the ability to work wonders with the limited set of tools you were given, and your capacity to find elegant solutions to stupid limitations imposed upon you.
I'm not sure I follow the line of reasoning here. Sure, a good programmer will do good things with bad tools. But he/she will do great things with good tools.

outlander wrote:
Thu Feb 22, 2018 3:11 pm
In the rest of the world, managers decide which set of tools you have, and which set of limitations you need to overcome - and those people are not even coders most of the time, but bean counters and ticket-punchers.
This would likely fall under the category of fundamental things that need to change.

outlander wrote:
Thu Feb 22, 2018 3:11 pm
If you disagree, try writing specialised software (military, security, industrial) and you'll find out that you'll be hampered not by the programming tools, but by your lack of knowledge of the processes involved.
Eh, I've worked on training simulations for oil refineries, military training, and HPC physics stuff. Sure, it's not like I've been knee deep in that stuff for 2 decades, and yes getting up to speed on some of their processes takes time, but looking back I can't honestly say that was the largest time sink. And it certainly didn't dominate enough to say tooling improvements aren't a worthwhile investment.

Cornflakes_91 wrote:
Thu Feb 22, 2018 3:13 pm
yes, in a team of 20+ with unmanaged pointers, one person changes a detail about how he handles his pointer access and suddenly random errors pop up all over the software and nobody knows why.
There are at least 2 major issues in that scenario. 1) Using unmanaged pointers in a system where the implementation is likely to affect the users. 2) Changing that implementation without changing the usage sites and just walking away.

This line of reasoning is looking at the problem from the perspective that "people can do bad things if you give them the ability to. Therefore don't give them the ability to do bad things."

And my response is that I don't believe that's a reasonable solution. You implement safety where the cost isn't too high, then you address the rest with training. I think there are cases where raw memory access is a useful, desirable tool. I'm generally against solutions of the style "you can never do this thing because you might screw it up". Prefer solutions of the form "hey, this is a pretty big foot gun and we don't encourage it, so we have these other tools you can use when it's not super important, but the functionality is there if/when you need it."
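A rough sketch of the shape I mean, with a hypothetical Grid type (not from any real codebase): checked access as the default, raw access as a documented opt-in.

    #include <cassert>
    #include <cstddef>
    #include <vector>

    struct Grid {
        std::vector<float> cells;

        // Default path: bounds-checked access, hard to misuse.
        float& at(std::size_t i) {
            assert(i < cells.size() && "Grid::at out of range");
            return cells[i];
        }

        // Escape hatch: raw pointer for hot loops, documented as the foot gun it is.
        float* raw() { return cells.data(); }
    };

    int main() {
        Grid g{std::vector<float>(1024, 0.0f)};
        g.at(10) = 1.0f;                 // everyday usage

        float* p = g.raw();              // opted-in raw access for a tight loop
        for (std::size_t i = 0; i < 1024; ++i) p[i] *= 2.0f;
        return 0;
    }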
Cornflakes_91 wrote:
Thu Feb 22, 2018 3:13 pm
also, my comic form answer on the whole "real programmers use raw memory exclusively!" topic
No one has said anything close to this.

Dinosawer wrote:
Thu Feb 22, 2018 3:27 pm
Pardon me being blunt, but have you actually done some programming in a decent sized group on a very large commercial long-term project in a non-memory safe programming language?
No, I'm coming from the other direction. I've worked on years-long team projects in memory-safe languages where that was a large productivity cost. The grass is always greener, maybe?
Dinosawer wrote:
Thu Feb 22, 2018 3:27 pm
But no, I guess we're just bad programmers for making the occasional tiny mistake in our 2800-file, 1.7-million-line C/C++ project.
I haven't seen anyone say that. And sure, maybe it's not possible to reach that scale without sacrificing some control. Or maybe it's not possible with that team, or those tools, or that particular codebase; I have no idea, since I'm not in that situation.

I work in an industry where I'm probably not going to work in codebases larger than maybe 100,000 lines of code. And in my experience those types of doomsday bugs are so exceedingly rare that I'll take that over having to deal daily with things like not being able to control where my allocations go, not fully controlling the GC, and taking perf hits because of extra GC-induced indirection.
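As a concrete example of the allocation control I mean, here's a rough sketch of a bump arena (hypothetical and heavily simplified: fixed 16-byte stride, no per-allocation frees), which is the kind of thing a GC'd runtime usually won't let you express:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Arena {
        std::vector<std::uint8_t> block;
        std::size_t used = 0;

        explicit Arena(std::size_t bytes) : block(bytes) {}

        void* alloc(std::size_t bytes) {
            std::size_t aligned = (bytes + 15) & ~std::size_t(15);  // 16-byte stride
            if (used + aligned > block.size()) return nullptr;      // caller decides what to do
            void* p = block.data() + used;
            used += aligned;
            return p;
        }

        void reset() { used = 0; }  // free everything at once, e.g. per frame
    };

    int main() {
        Arena frame(1 << 20);                       // 1 MiB per-frame scratch space
        float* positions = static_cast<float*>(frame.alloc(sizeof(float) * 3 * 1000));
        (void)positions;                            // ... fill and use during the frame ...
        frame.reset();                              // everything gone, no GC, no per-object frees
        return 0;
    }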

In general, this seems to have ruffled some feathers, and I apologize for that. I mean this as an honest discussion about programming because I find that genuinely interesting. I'm not claiming that I have the answer to everything or that my ideas are the correct and only ideas.

That said, I think it's worth challenging the commonly held beliefs, such as trying to force memory safety onto the programmer. If you are reading between the lines and deciding that I've said "real programmers use pointers", "real programmers don't make mistakes", or some other nonsense, you're flat-out incorrect.

For the sake of productive discussion, I'm going to try to clarify my position.

Problem 1: most code today is a couple of orders of magnitude slower than it should be.
Problem 2: most code today is full of bugs.
Position 1: It's worth trying to understand why this is the case and working toward solutions.

That's pretty much it. I'm not claiming that programmers who make mistakes are bad, or that programmers who use high-level languages are bad, or any other strange extrapolation from the above.

Re: [Adam] Thursday, February 15, 2018

#113
AdamByrd wrote:
Thu Feb 22, 2018 5:15 pm
That's pretty much it. I'm not claiming that programmers who make mistakes are bad, or that programmers who use high-level languages are bad, or any other strange extrapolation from the above.
Sweet, now what about that release date?

Re: [Adam] Thursday, February 15, 2018

#114
Most of today's code is not just slower but totally riddled with unnecessary abstractions and complications (which may or may not correlate with it being slower).
A lot of people are trying to write "neat" and "clever" code instead of writing straight-to-the-point code.
Some people are totally brainwashed by OOP and can talk the whole day about how everything just HAS to be decoupled no matter the context, how global variables are evil, how inheritance is the key, and how "neatly" they could have replaced that switch case with a virtual function call.
They aren't thinking about the actual purpose: converting data in format A to data in format B.
This produces complicated, verbose, and error-prone code. And this thinking needs to go.
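To illustrate what I mean by straight-to-the-point code, here's a sketch with hypothetical formats A and B: one plain function that converts the input records into the output records, no class hierarchy and no virtual dispatch in sight.

    #include <string>
    #include <vector>

    struct RecordA { int id; float x, y; };            // input format
    struct RecordB { std::string key; float pos[2]; }; // output format

    std::vector<RecordB> convert(const std::vector<RecordA>& in) {
        std::vector<RecordB> out;
        out.reserve(in.size());
        for (const RecordA& a : in)
            out.push_back(RecordB{"entity_" + std::to_string(a.id), {a.x, a.y}});
        return out;
    }

    int main() {
        std::vector<RecordA> in{{1, 0.0f, 0.0f}, {2, 3.0f, 4.0f}};
        std::vector<RecordB> out = convert(in);
        return out.size() == 2 ? 0 : 1;
    }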

Re: [Adam] Thursday, February 15, 2018

#115
> Some people are totally brainwashed by OOP and can talk the whole day about how everything just HAS to be decoupled no matter the context
I think that is starting to change nowadays, at least in the game-dev community (with architectures using an entity component system).

When I write code, I keep everything visible at the beginning. I can always make a variable private later and add getters and setters once the module is mature and can be encapsulated. That would probably look different if I had to work in a larger team...
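A small sketch of that workflow with a hypothetical Ship module: plain public data while iterating, encapsulation with getters and setters once the interface has settled.

    // Early iteration: everything visible, easy to poke at from anywhere.
    struct ShipPrototype {
        float hull = 100.0f;
        float speed = 5.0f;
    };

    // Later, once the module is mature: the same data, now encapsulated and validated.
    class Ship {
    public:
        float hull() const { return hull_; }
        void setHull(float value) { hull_ = value > 0.0f ? value : 0.0f; }
    private:
        float hull_ = 100.0f;
    };

    int main() {
        ShipPrototype proto;
        proto.hull -= 10.0f;      // fine during prototyping

        Ship ship;
        ship.setHull(-5.0f);      // clamped to 0 by the setter
        return ship.hull() > 0.0f ? 1 : 0;
    }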

Re: [Adam] Thursday, February 15, 2018

#117
I usually write the same algorithm several times:
first in a very hacky and fast way, to see if the raw algorithm gives the required results,
then using neat OOP with objects to pass data around,
then (and only if there is a performance issue) converting everything to raw data structures and direct function calls.

More important here is to test any more complex module properly, since finding an issue within that segment can be a pain later in the project.
So preparing the environment for fast iteration on that specific aspect is key.
That's why I could not imagine developing a large project in C, sitting around for minutes waiting for the compilation to finish.
I want my code running within the next couple of seconds.

If something gets too complicated and tedious, but is rather regular (like serializing and deserializing data into a compact but flexible data stream),
I write myself a small scripting language, parser, and code generator.
Why write code if the computer can also write code?
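As a toy example of that (a hypothetical mini spec format; Ship and Stream are just stand-in names): read "name type" pairs and emit the serialization boilerplate as C++ source.

    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        std::string spec = "hull float\nshield float\nammo int\n";  // stand-in for a spec file
        std::istringstream in(spec);

        std::cout << "void serialize(const Ship& s, Stream& out) {\n";
        std::string name, type;
        while (in >> name >> type)
            std::cout << "    out.write_" << type << "(s." << name << ");\n";
        std::cout << "}\n";
        return 0;
    }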

Re: [Adam] Thursday, February 15, 2018

#118
Damocles wrote:
Thu Feb 22, 2018 5:27 pm
I think that is starting to change nowadays, at least in the game-dev community (with architectures using an entity component system).
From what I hear, AAA has mostly moved on from it, but Unity still encourages newcomers to think in terms of inheritance and dynamic dispatch, unfortunately. I'm excited to see what Mike Acton does for them.

DoctorGester wrote:
Thu Feb 22, 2018 5:36 pm
I'm personally not a big fan of an entity-component-system design. It can solve a bunch of problems but in my opinion a function is the best component already. ECS itself is a bit like multiple-inheritance except it's a "has a" instead of an "is a" relationship. I would say most games simply don't need a system like that.
I'm pretty enamored with them, honestly. If you want some wicked fast loops, I don't know of a better way to do it. I would agree that not all games need it, but 1) freeing up more overhead to do other cool things is great, and 2) I'm not sure of an easier way to manage the complexity. If you want to share behavior across different entities you have to choose something: inheritance, composition (through components or mixins), function-level polymorphism (a la Jai), or plain old functions + extra code.

Plain old functions get annoying if you're passing lots of data. I'd rather be able to say UpdatePathfinding(navData) than UpdatePathfinding(pos, vel, targetPos, targetVel, ...). Though with some care you could write a batch update for each entity type and factor the common behavior into functions. This definitely wins for simplicity and it's my second choice, but you lose the cache friendliness of components, and you trade system-level ordering (e.g. aim tracking takes place after pathfinding) for entity-level ordering (e.g. capital ships update before fighters).

Function-level polymorphism doesn't exist in C/C++ (wait, maybe through templates, but that's flirting with crazytown).

That pretty much leaves components or mixins.
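Roughly the component-array shape I'm picturing (hypothetical NavData/AimData fields, nothing to do with any actual engine): each system walks one contiguous array, which keeps the loops cache friendly and puts the update order at the system level.

    #include <vector>

    struct NavData { float pos[2]; float vel[2]; float targetPos[2]; };
    struct AimData { float heading; float targetHeading; };

    struct World {
        std::vector<NavData> nav;   // one contiguous array per component type
        std::vector<AimData> aim;
    };

    void UpdatePathfinding(std::vector<NavData>& nav, float dt) {
        for (NavData& n : nav) {    // tight loop over contiguous data
            n.pos[0] += n.vel[0] * dt;
            n.pos[1] += n.vel[1] * dt;
        }
    }

    void UpdateAimTracking(std::vector<AimData>& aim, float dt) {
        for (AimData& a : aim)
            a.heading += (a.targetHeading - a.heading) * 0.5f * dt;
    }

    void Tick(World& world, float dt) {
        UpdatePathfinding(world.nav, dt);   // system-level ordering:
        UpdateAimTracking(world.aim, dt);   // aim tracking runs after pathfinding
    }

    int main() {
        World world;
        world.nav.resize(1000);
        world.aim.resize(1000);
        Tick(world, 1.0f / 60.0f);
        return 0;
    }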
