This is a more technical post, but I am once more trying to convey the beauty hiding in a formula.
A determinant form ∆ is a map satisfying:

Let α and β be constants and let x and y be vectors. Then ∆(αx + βy, z, …) = α ∆(x, z, …) + β ∆(y, z, …). This holds in every component; here it is illustrated only in the first argument of Δ (“multilinearity”)

If two components are equal, then Δ(…, x, …, x, …) = 0 (“alternating”). If you grasp linearity, you will immediately conclude that Δ(…, x, …, αx, …) = 0 for any α.

Now we have a statement about when Δ takes the value 0, but we have no idea how Δ behaves if we increase α or x a tiny bit. How large is the value of Δ? From the properties above, we don’t know. In fact, there are many maps Δ satisfying (1) and (2), but it can be shown that any two differ only by a constant factor. A third property uniquely identifies the determinant form, namely the determinant:

Let I be the unit matrix, i.e. the arguments are the unit vectors (the first column is [1 0 0 …]ᵀ, the second column is [0 1 0 …]ᵀ, and so on). Then Δ(I) = 1.
Intuitively, it fixes the scale: it defines how much the arguments need to change for Δ(…) = 0 to become Δ(…) = 1. Thus there is a unique determinant satisfying these 3 properties. Be aware that matrices describe linear functions. Hence (αx + βy, z, …) can be written as a matrix (αx + βy is one column, z is another column, and so on). I will prefer this matrix notation from now on.
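The three properties can be checked numerically. Here is a minimal sketch using NumPy’s built-in determinant; the scalars and vectors are arbitrary example values, not anything fixed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 2.0, -3.0
x, y, z, w = (rng.random(3) for _ in range(4))

def det_cols(*cols):
    # determinant of the matrix whose columns are the given vectors
    return np.linalg.det(np.column_stack(cols))

# (1) multilinearity in the first component
assert np.isclose(det_cols(alpha * x + beta * y, z, w),
                  alpha * det_cols(x, z, w) + beta * det_cols(y, z, w))

# (2) two equal components give 0
assert np.isclose(det_cols(x, z, x), 0.0)

# (3) normalization: Δ(I) = 1
assert np.isclose(det_cols(*np.eye(3)), 1.0)
```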
Leibniz’ formula gives this unique determinant explicitly for A ∈ \(K^{n × n}\), where K is a field like the real numbers ℝ:

\(\det(A) = \sum_{σ \in S_n} \operatorname{sign}(σ) \prod_{i=1}^n a_{σ(i), i}\)

This formula is incredibly difficult to read, so let us look at its parts:

\(σ \in S_n\) means that we consider some permutation σ of n elements. A permutation is a map. For example, σ(1) = 1, σ(2) = 2, σ(3) = 3, … is the identity map; this is one permutation. σ(1) = 2, σ(2) = 1, σ(3) = 3, σ(4) = 4, … is another permutation. In general, a permutation is a map in which every argument occurs exactly once and every mapped value occurs exactly once.
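Such permutations are easy to enumerate; a small sketch for n = 3, using Python’s standard library:

```python
from itertools import permutations

# All permutations of {1, 2, 3}, written out as maps i -> sigma(i):
for sigma in permutations([1, 2, 3]):
    print({i + 1: sigma[i] for i in range(3)})
# The first map printed is the identity {1: 1, 2: 2, 3: 3};
# S_3 has 3! = 6 elements in total.
```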

sign(σ) is the signature of a permutation. It returns −1 if “the number of malpositions in σ is odd” and +1 if “the number of malpositions in σ is even”. A malposition is given if i < j but σ(i) > σ(j). The identity map π has no malpositions, thus sign(π) = 1.
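This definition translates directly into code; a sketch that simply counts the malpositions:

```python
def sign(sigma):
    """Signature via the number of malpositions:
    pairs i < j with sigma(i) > sigma(j)."""
    malpositions = sum(1 for i in range(len(sigma))
                         for j in range(i + 1, len(sigma))
                         if sigma[i] > sigma[j])
    return -1 if malpositions % 2 else 1

print(sign((1, 2, 3)))  # identity, no malpositions → 1
print(sign((2, 1, 3)))  # one malposition → -1
```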

\(\prod_{i=1}^n a_{σ(i), i}\) considers column 1 and picks row σ(1). It considers column 2 and picks row σ(2). And so on. The resulting values are multiplied.
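Putting the pieces together, the whole formula fits in a few lines of Python (`leibniz_det` and the sample matrix are my own illustrative names and values):

```python
from itertools import permutations
from math import prod

def sign(sigma):
    malpositions = sum(1 for i in range(len(sigma))
                         for j in range(i + 1, len(sigma))
                         if sigma[i] > sigma[j])
    return -1 if malpositions % 2 else 1

def leibniz_det(A):
    """det(A) = sum over all sigma of sign(sigma) * prod_i a[sigma(i)][i],
    with 0-based indices."""
    n = len(A)
    return sum(sign(sigma) * prod(A[sigma[i]][i] for i in range(n))
               for sigma in permutations(range(n)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(leibniz_det(A))  # → -3
```

Note that this needs n! terms, so it is a definition to reason with, not an algorithm to compute large determinants.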
One illustrative application of Leibniz’ formula for determinants is the Rule of Sarrus (the special case n = 3). The examples in this article should help in grasping the concept; I hope you gain an intuitive understanding of Leibniz’ formula for determinants this way.
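For n = 3 the 3! = 6 terms of Leibniz’ formula can be written out explicitly, which is exactly the Rule of Sarrus; a sketch (the function name and test matrix are illustrative):

```python
def sarrus(A):
    """Rule of Sarrus: the n = 3 special case of Leibniz' formula,
    three 'diagonal' products minus three 'anti-diagonal' products."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(sarrus(A))  # → -3
```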
Now my main question is: does the formula still satisfy property (2)? I want to give an intuition that this is true; my explanations do not constitute a proof.
Consider a matrix A ∈ \(K^{3 × 3}\) of columns \(c_1, c_2\) and \(c_3\) with two equal columns \(c_1 = c_3 =\) [γ δ ε]ᵀ.
Because we pick one value per column, there are permutations with \(a_{σ(1), 1} = a_{1, 1} = γ\) and \(a_{σ(3), 3} = a_{3, 3} = ε\), and the corresponding swapped permutations with \(a_{σ(1), 1} = a_{3, 1} = ε\) and \(a_{σ(3), 3} = a_{1, 3} = γ\). In fact, one product \(\prod_{i=1}^n a_{σ(i), i}\) contains the factors γε and the other contains the factors εγ. As we consider commutative fields like ℝ, γε = εγ. These two permutations differ by a transposition. Transpositions are permutations σ which equal the identity map except that two elements swap places (σ(i) = j and σ(j) = i). At this point, you need to believe me that any transposition σ satisfies sign(σ) = −1, and that composing a permutation with a transposition flips its sign. Hence the products γε… and εγ… enter with opposite signs, and we can collect them as γε(… − …). But are the remaining factors … equal? In our 3 × 3 case it is easy to see that they are: we must not pick further values from rows 1 and 3 or columns 1 and 3, so both remaining factors are the middle entry \(a_{2, 2}\) of the second column. In general, the two paired permutations pick exactly the same remaining elements, since the swap only exchanges the two equal columns. As a result, the products cancel pairwise and the determinant is zero.
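To make this pairwise cancellation concrete, here is a small numerical check; γ, δ, ε are arbitrary values, and κ, λ, μ are placeholder names for the second column, which the argument above leaves free:

```python
from itertools import permutations
from math import prod

def sign(sigma):
    malpositions = sum(1 for i in range(len(sigma))
                         for j in range(i + 1, len(sigma))
                         if sigma[i] > sigma[j])
    return -1 if malpositions % 2 else 1

# columns c1 = c3 = [gamma, delta, eps]^T; the middle column is arbitrary
gamma, delta, eps = 2, 3, 5
kappa, lam, mu = 7, 11, 13
A = [[gamma, kappa, gamma],
     [delta, lam,   delta],
     [eps,   mu,    eps]]

# each signed Leibniz term sign(sigma) * a[sigma(0)][0] * a[sigma(1)][1] * a[sigma(2)][2]
terms = [sign(s) * prod(A[s[i]][i] for i in range(3))
         for s in permutations(range(3))]
print(terms)       # every term appears once with + and once with -
print(sum(terms))  # → 0
```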
It takes quite some technical detail to understand this, but maybe you can get an intuition simply by considering the Rule of Sarrus for A:
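Writing the second column as [κ λ μ]ᵀ (placeholder names, since only the first and third columns are fixed above), the Rule of Sarrus expands to

\(\det A = γλε + κδε + γδμ - ελγ - μδγ - εδκ = 0,\)

since by commutativity each positive term cancels exactly one negative term (γλε = ελγ, and so on).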