The debate about whether reality is predetermined or "random" is a red herring. The former seems self-evident to us because the very nature of logical reasoning is to say that something must have a cause, while the latter seems intuitive insofar as supposing, in theory, that our actions are predetermined changes absolutely nothing about what we talk about when we talk about agency and responsibility. Nor is it clear why some kind of randomness would change any of that: if your actions have no cause, then how could "you" have caused them?
More importantly, however, the debate itself mistakes the map for the territory.
The idea of causality has been muddled in modernity. We tend to think of causality sequentially, as boiling down to cause and effect. This is what Aristotle called the efficient cause--one may say that someone built a house because they needed a roof over their head, but the efficient cause is the act of assembly that brought the house into existence. This doesn't, however, account for all the ways in which we say that one thing follows from another: it follows from my floor being dry that nobody recently dumped a bucket of water on it. That is not cause and effect but inference.
While these may seem like apples and oranges, both are equally valid modes of entailment: when one is working within a logical syntax, the two are indistinguishable and interchangeable. Therefore, insofar as we've formalized some slice of reality into such a syntax, the reason something is true is no longer necessarily a question of efficient cause.
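To make this concrete, here is a minimal sketch in Python (all rule names are hypothetical illustrations, not part of any formal apparatus in the text): the causal statement about a thrown ball and the inferential statement about the dry floor are encoded as the very same kind of implication, and the derivation procedure cannot tell them apart.

```python
# A toy propositional syntax: every entailment is a (premise, conclusion)
# pair, with no marker for whether it reflects an efficient cause or a
# mere inference -- inside the syntax, the distinction does not exist.

def entails(rules, facts):
    """Forward-chain implications to a fixed point; return every derivable fact."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    ("ball_thrown_at_bob", "bob_hurt"),   # cause -> effect
    ("bucket_dumped", "floor_wet"),       # cause -> effect
    ("floor_dry", "no_bucket_dumped"),    # contrapositive: pure inference
]

# Both derivations run through exactly the same machinery:
print(sorted(entails(rules, {"ball_thrown_at_bob"})))
print(sorted(entails(rules, {"floor_dry"})))
```

The point is not the code but the encoding: once both kinds of "follows from" are reduced to the same syntactic rule, nothing inside the system marks one as efficient cause and the other as inference.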
One could nonetheless insist that there must be some cause prior to an effect, but where is this cause? Maybe the syntax contains such an entailment, maybe it doesn't; and maybe we'll find it if we expand this system of entailment or create an entirely new one. Without any such syntax, however, we have nothing to talk about at all: without a syntax, there is no system by which events have identities such that one can say that one follows from another.
Why do we need such a formalism to speak of entailment? Consider that Alice hurls a ball at Bob and it hurts him. Physics could tell you that this was bound to happen because she threw the ball with a certain amount of force in a certain direction and Bob was standing right there, but all of this also relies on giving Bob a stable identity. Of course, outside of some infinitesimal changes, Bob is recognizably the same person, but it's still the case that we chose to assign an identity that ignores changes irrelevant to the question being asked.
And if that seems too nitpicky, then consider Bob taking a pill that will kill him in ten years. By the time it happens, every molecule of his body is different, and even in terms of form he has changed: he has aged, and his brain structure, his relations with other people, and so on have all shifted. Whether that makes him the same person is a question of defining some kernel of his trajectory that fits the question at hand.
Unfortunately, this invites a bias: one starts to miss that any such system of entailment is but a kernel distilled from a river we never step in twice, and to suppose that because our syntax is by definition a web of unbreakable rules, this must be how reality works.
In the material world, there is nonetheless a real difference between efficient cause and other forms of entailment: the former allows prediction, and by extension, control. If you know what your adversary is going to do next, you can exploit their weaknesses, and if you know what certain forces of nature will do under certain conditions, you can harness them into technology.
In this sense, there does seem to exist a deterministic causal mechanism. But to return to the discussion of syntax, prediction happens with regard to certain formally defined attributes of something's identity and not others, so this is not an absolute determinism but only a relative one. Nonetheless, one might ask how much this matters if, hypothetically, Newton's laws (accounting for relativity, and assuming quantum fluctuations cancel out by the law of large numbers) would dictate where every atom is at every point in time into the indefinite future. To answer this question, one must ask: according to whom, and how?
Instead of presuming the universe to be made up of atomic building blocks following basic rules of force and playing the simulation from there, one has to consider under what conditions this theory holds to begin with. The observations that establish these rules must be distilled through the creation of controlled environments; the rules, in turn, cannot be applied trivially, but only through a messy process of engineering that itself creates a "controlled" environment--different from, but still largely akin to, the one that allowed the observations to happen in the first place.
One can think of this as zooming in on an area until the inner workings become clearly visible--except that here, the "zooming in" requires actually building a pocket of stability that makes this kind of causality clear. For the more technically minded, this is analogous to a topological manifold: for any given point, there exists some neighborhood in which things cleanly follow a certain logic, but this logic does not necessarily hold for any given neighborhood contained in the space, let alone for the space as a whole. In a sense, a successful scientific experiment is a proof of concept for building something that constructs a controlled environment in a similar but more instrumental way.
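For reference, the manifold condition alluded to here can be stated precisely; in the analogy, the chart U plays the role of the "neighborhood with clean logic," and the absence of a global chart plays the role of the space as a whole.

```latex
% Topological n-manifold: every point has a neighborhood homeomorphic
% to R^n, though no single such chart need cover the whole space.
\forall p \in M,\ \exists\, U \subseteq M \text{ open with } p \in U,
\ \exists\, \varphi : U \to \mathbb{R}^n \text{ a homeomorphism.}
```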
Either way, determinism is something that we construct: it cannot be defined apart from syntax, and syntax itself can only be meaningfully created through some physically instantiated logical closure. Zoom out too much, and we're back in a much more nebulous neighborhood. To suggest that determinism exists apart from this kind of localized control is not only to make the baseless claim that there exists a view from nowhere (something that makes no sense, since observation is secondary to enactment), but also to ignore that at the quantum level, any such "view" is itself an active perturbation: we fundamentally cannot gain information about one part of a particle's state without losing information about another.
Because of the need for such control in order to see a deterministic dance of matter, it follows that for you or anyone else to be predicted, that same control would have to be exercised on you; or, perhaps you make your own moves so predictable that others can control you anyhow. The more your actions are caused by factors endogenous to yourself, however, rather than outside forces, the less predictable and the less controllable you are; and insofar as nothing can predict you and there exists no view from nowhere, this gives you free will in the strongest sense of the word.
This free will is not mere randomness, however: randomness exists over a set of possible events, whereas to be free is to embody a system of entailment that exists nowhere else. Insofar as that is the case, your actions are not merely a random selection among "events" identified in some such system, but something qualitatively new. There may of course be attributes of them that others have a language for, such as your position in space, or something that signifies a shared social relationship, but those things will appear effectively "random" insofar as these familiar attributes are not the sole inputs into the process by which you decide what to do next. Your own unique inputs will, to everyone else, look like mere material accidents, even when in a relative sense they are the essence of your decisions. Contingency is thus the irreducibility of the essence one obeys; to be free is to act irreducibly from it.