Orthogonal ideas branch off in a completely unrelated direction. They advance neither towards the goal nor against it. Feature creep is orthogonal. Orthogonality means independence and unrelatedness: if X and Y are orthogonal, changing X doesn't affect Y at all, and vice versa.

Parallel ideas are additive. They are not mutually exclusive and point to the same goal, but don't overlap with the current idea. Some problems are "embarrassingly" parallel: they break down easily into subparts that can be done independently with little coordination.
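These two definitions can be made literal with vectors. A toy sketch of my own (the 2-D "idea" vectors are invented for illustration): the dot product is zero for orthogonal directions and positive for parallel ones.

```python
def dot(x, y):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(x, y))

goal = [1.0, 0.0]        # the direction of the goal
orthogonal = [0.0, 1.0]  # branches off: no component along the goal
parallel = [2.0, 0.0]    # additive: points the same way as the goal

# Orthogonal: dot product is zero, so movement along one
# direction contributes nothing along the other.
print(dot(goal, orthogonal))  # → 0.0

# Parallel: dot product is positive, and the vectors are scalar
# multiples of each other, so the efforts simply add up.
print(dot(goal, parallel))    # → 2.0
```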

Geometric metaphors for problem-solving make sense to me. It's why we refer to a set of problems as a problem space and a set of solutions as a solution space. We have a faint intuition of the "distance" between ideas: related ideas and approaches are close together, and unrelated ones are further apart. Common phrases like "going off on a tangent" capture the same intuition.
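One way to make that notion of "distance" concrete is cosine similarity, a standard measure of how aligned two vectors are. A minimal sketch, again with made-up 2-D "idea" vectors:

```python
import math

def cosine_similarity(x, y):
    """Cosine of the angle between two vectors:
    1 = same direction, 0 = orthogonal, -1 = opposite."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

# Hypothetical "idea" vectors, invented for illustration.
current_idea = [1.0, 0.2]
related_idea = [0.9, 0.3]   # points roughly the same way
tangent_idea = [-0.2, 1.0]  # "off on a tangent"

print(cosine_similarity(current_idea, related_idea))  # close to 1
print(cosine_similarity(current_idea, tangent_idea))  # → 0.0
```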

But geometric metaphors capture more than just independence. Let's look at a more straightforward example than ideas: words. In machine learning, researchers have used word vectors to build language models. Think of n-dimensional analogies: man is to woman as uncle is to aunt; king is to kings as queen is to queens. You can visualize these analogies below.

With deep word vector embeddings, we can perform composition on the analogies: king − man ≈ queen − woman. Simplifying a bit to two dimensions, you can see this graphically by just following the arrows.
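The arrow-following can be written out as plain vector arithmetic. A sketch with toy 2-D "embeddings" I've invented for illustration (real word vectors, e.g. from word2vec or GloVe, live in hundreds of dimensions):

```python
# Toy 2-D "embeddings", invented for illustration.
# Axis 0 loosely encodes "royalty"; axis 1 encodes "gender".
vectors = {
    "king":  [1.0, 0.25],
    "queen": [1.0, 0.75],
    "man":   [0.25, 0.25],
    "woman": [0.25, 0.75],
}

def add(x, y):
    return [a + b for a, b in zip(x, y)]

def sub(x, y):
    return [a - b for a, b in zip(x, y)]

# king - man + woman ≈ queen: the "gender" offset is the same
# arrow whether you start from "king" or from "man".
result = add(sub(vectors["king"], vectors["man"]), vectors["woman"])
print(result)  # → [1.0, 0.75], the vector for "queen"
```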

Others have tried to formalize some of these techniques for higher-order thinking. For example, Simon Wardley breaks down different business strategies by drawing detailed maps, which he calls Wardley maps. You can see some on his blog.

Humans have difficulty with higher dimensions; we can't visualize these spaces easily. Maybe that's why geometric metaphors have remained so hard for us to pin down. But I believe that thinking about problems and ideas this way is helpful, especially as complexity increases.

Some definitions: