# Demonstrations for the theory of proveit.linear_algebra.tensors¶

In [1]:
import proveit
from proveit import defaults, UnsatisfiedPrerequisites
from proveit import Function, ExprRange, ExprTuple, IndexedVar
from proveit import a, f, i, u, v, w, x, y, z, alpha, beta, fi, gamma, delta
from proveit.logic import Forall, Equals, NotEquals, InSet, CartExp
from proveit.numbers import Natural, Real, Complex
from proveit.numbers import one, two, three, four, five, Interval
from proveit.linear_algebra import (
VecSpaces, VecAdd, VecSum, VecZero, ScalarMult, TensorProd, TensorExp)
%begin demonstrations


### Vector space default assumptions¶

In order to apply the tensor product theorems, the operands must be known to be vectors in vector spaces over a common field. For convenience, we may specify a default field, or we may specify a field when calling the TensorProd methods.

Let's set some defaults for convenience in our testing below.
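For orientation, the coordinate-level tensor (Kronecker) product that these theorems treat abstractly can be sketched in plain Python. This is an illustrative aside, not Prove-It code; the `kron` helper below is hypothetical and not part of the Prove-It API.

```python
# A minimal sketch of the coordinate-level tensor (Kronecker) product
# of two vectors, represented as plain Python lists of scalars.
def kron(u, v):
    # The entries of u (x) v are all pairwise products u_i * v_j,
    # with the index of u varying slowest.
    return [a * b for a in u for b in v]

# A vector in R^2 tensored with a vector in R^3 lives in R^6.
print(kron([1, 2], [1, 10, 100]))  # -> [1, 10, 100, 2, 20, 200]
```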

In [2]:
R3 = CartExp(Real, three)

R3:
In [3]:
C3 = CartExp(Complex, three)

C3:
In [4]:
defaults.assumptions = [
InSet(u, R3), InSet(v, R3), InSet(w, R3),
InSet(x, R3), InSet(y, R3), InSet(z, R3),
NotEquals(x, VecZero(R3)), NotEquals(z, VecZero(R3)),
InSet(a, Real), InSet(alpha, Complex), InSet(beta, Real),
InSet(gamma, Real), InSet(delta, Real),
Forall(i, InSet(fi, R3), domain=Natural)]

defaults.assumptions:
In [5]:
summand_assumption = defaults.assumptions[-1]

summand_assumption:
In [6]:
summand_assumption.instantiate(assumptions=[summand_assumption,
InSet(i, Natural)])


### Some Example TensorProd For Testing¶

In [7]:
tensor_prod_00 = TensorProd(ScalarMult(a, x), y)

tensor_prod_00:
In [8]:
tensor_prod_000 = TensorProd(x, ScalarMult(a, y))

tensor_prod_000:
In [9]:
tensor_prod_01 = TensorProd(ScalarMult(a, x), y, z)

tensor_prod_01:
In [10]:
tensor_prod_02 = TensorProd(x, ScalarMult(a, y), z)

tensor_prod_02:
In [11]:
tensor_prod_03 = TensorProd(x, y, ScalarMult(a, z))

tensor_prod_03:
In [12]:
tensor_prod_04 = TensorProd(u, TensorProd(v, ScalarMult(a, w), x,
ScalarMult(alpha, y)))

tensor_prod_04:
In [13]:
tensor_prod_05 = TensorProd(u, ScalarMult(a, v), VecAdd(w, x, y), z)

tensor_prod_05:
In [14]:
tensor_prod_06 = TensorProd(VecAdd(x, y), z)

tensor_prod_06:
In [15]:
tensor_prod_07 = TensorProd(x, VecAdd(y, z))

tensor_prod_07:
In [16]:
tensor_prod_08 = TensorProd(u, v, VecAdd(w, x, y), z)

tensor_prod_08:
In [17]:
tensor_prod_with_sum_01 = TensorProd(x, VecSum(i, Function(f, i), domain=Interval(one, three)), z)

tensor_prod_with_sum_01:
In [18]:
tensor_prod_with_sum_02 = TensorProd(VecSum(i, Function(f, i), domain=Interval(two, four)), z)

tensor_prod_with_sum_02:
In [19]:
tensor_prod_with_sum_03 = TensorProd(x, VecSum(i, Function(f, i), domain=Interval(two, four)))

tensor_prod_with_sum_03:
In [20]:
tensor_prod_with_sum_04 = TensorProd(u, v, w, x, VecSum(i, Function(f, i), domain=Interval(one, five)), z)

tensor_prod_with_sum_04:
In [21]:
tensor_prod_inside_sum_01 = VecSum(i, TensorProd(x, Function(f, i)), domain=Interval(two, four))

tensor_prod_inside_sum_01:
In [22]:
tensor_prod_inside_sum_02 = VecSum(i, TensorProd(y, Function(f, i)), domain=Interval(two, four))

tensor_prod_inside_sum_02:
In [23]:
tensor_prod_inside_sum_03 = VecSum(i, TensorProd(z, Function(f, i)), domain=Interval(two, four))

tensor_prod_inside_sum_03:
In [24]:
tensor_prod_inside_sum_04 = VecSum(i, TensorProd(Function(f, i), x), domain=Interval(two, four))

tensor_prod_inside_sum_04:
In [25]:
tensor_prod_inside_sum_05 = VecSum(i, TensorProd(x, Function(f, i), z), domain=Interval(two, four))

tensor_prod_inside_sum_05:
In [26]:
tensor_prod_inside_sum_06 = VecSum(i, TensorProd(x, y, Function(f, i), z), domain=Interval(two, four))

tensor_prod_inside_sum_06:

Upon implementing Issue #28, or something along those lines, the following should not be necessary:

In [27]:
i_domains = [tensor_prod_with_sum_01.operands[1].domain,
             tensor_prod_with_sum_03.operands[1].domain,
             tensor_prod_with_sum_04.operands[-2].domain]
judgments = []
for i_domain in i_domains:
    judgment = summand_assumption.instantiate(
        assumptions=[summand_assumption, InSet(i, i_domain)])
    judgments.append(judgment)
judgments


### TensorProd simplification¶

In [28]:
help(TensorProd.shallow_simplification)

Help on function shallow_simplification in module proveit.linear_algebra.tensors.tensor_prod:

shallow_simplification(self, *, must_evaluate=False, **defaults_config)
Returns a proven simplification equation for this TensorProd
expression assuming the operands have been simplified.

Currently deals only with:
(1) simplifying a TensorProd(x) (i.e. a TensorProd with a
single operand x) to x itself. For example,
TensorProd(x) = x.
(2) Ungrouping nested tensor products.
(3) Factoring out scalars.

Keyword arguments are accepted for temporarily changing any
of the attributes of proveit.defaults.

'shallow_simplified' returns the right-hand side of 'shallow_simplification'.
'shallow_simplify', called on an InnerExpr of a Judgment,
substitutes the right-hand side of 'shallow_simplification' for
the inner expression.



First test out unary TensorProd simplification.

In [29]:
TensorProd(x).simplification()


Our next test will involve ungrouping and pulling out scalar factors, but it will not work without a proper default field specified, since it mixes real vectors with a complex scalar.
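The field mismatch can be seen concretely in plain Python (an illustrative sketch, independent of Prove-It): scaling a real vector by a complex scalar yields complex components, so the result leaves R^3, and the common field must contain all the scalars involved.

```python
# Illustrative sketch: scaling a real vector by a complex scalar
# produces complex components, so the result is no longer in R^3.
x = [1.0, 2.0, 3.0]          # a vector in R^3
alpha = 2.0 + 1.0j           # a complex scalar
scaled = [alpha * c for c in x]
print(all(isinstance(c, complex) for c in scaled))  # -> True
```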

In [30]:
tensor_prod_04

In [31]:
try:
    tensor_prod_04.simplification()
    assert False, "Expecting an error"
except UnsatisfiedPrerequisites as e:
    print("Expected error: %s"%e)

Expected error: Prerequisites not met while assuming (u in Real^{3}, v in Real^{3}, w in Real^{3}, x in Real^{3}, y in Real^{3}, z in Real^{3}, x != 0(Real^{3}), z != 0(Real^{3}), a in Real, alpha in Complex, beta in Real, gamma in Real, delta in Real, forall_{i in Natural} (f(i) in Real^{3})): A field for vector spaces was not specified and VecSpaces.default_field was not set.

In [32]:
VecSpaces.default_field = Real

VecSpaces.default_field:
In [33]:
from proveit import InstantiationFailure
try:
    tensor_prod_04.simplification()
    assert False, "Expecting an error"
except InstantiationFailure as e:
    print("Expected error: %s"%e)

Expected error: Proof step failed assuming {u in Real^{3}, v in Real^{3}, w in Real^{3}, x in Real^{3}, y in Real^{3}, z in Real^{3}, x != 0(Real^{3}), z != 0(Real^{3}), a in Real, alpha in Complex, beta in Real, gamma in Real, delta in Real, forall_{i in Natural} (f(i) in Real^{3})}:
Attempting to instantiate |- forall_{K} [forall_{V in_c VecSpaces(K)} [forall_{a in K} [forall_{x in V} ((a * x) in V)]]] with {K: Real, V: Real^{3}, a: alpha, x: y}:
Unsatisfied condition: alpha in Real. For debugging purposes, this is accessible via Instantiation.unsatisfied_condition with applicable assumptions in Instantiation.condition_assumptions.

In [34]:
VecSpaces.default_field = Complex

VecSpaces.default_field:
In [35]:
defaults.assumptions

In [36]:
# tensor_prod_04.simplification()


### The TensorProd.membership_object() method¶

In [37]:
x_y_z = TensorProd(x, y, z)

x_y_z:
In [38]:
R3_R3_R3 = TensorProd(R3, R3, R3)

R3_R3_R3:
In [39]:
# So, this is interesting ... generates an error
# VecSpaces.known_field(R3_R3_R3)

In [40]:
# THIS HANGS FOREVER
# this suggests it would be good to have a TensorProd.containing_vec_space() method
# but this was working before implementing the TensorProdMembership class ?!
# field=Real
# temp_vec_space = VecSpaces.known_vec_space(x_y_z, field=field)

In [41]:
# VecSpaces.known_field(temp_vec_space)

WORKING HERE

Below is a manual construction of the instantiation process for the tensor_prod_is_in_tensor_prod_space theorem from the proveit.linear_algebra.tensors package.

In [42]:
from proveit.linear_algebra.tensors import tensor_prod_is_in_tensor_prod_space
tensor_prod_is_in_tensor_prod_space

In [43]:
temp_self = InSet(x_y_z, TensorProd(R3, R3, R3))

temp_self:
In [44]:
_a_sub = temp_self.element.operands

_a_sub:
In [45]:
_i_sub = _a_sub.num_elements()

_i_sub:
In [46]:
# This does NOT work, nor does get_field appear designed for this
_K_sub = VecSpaces.get_field(temp_self.domain)

_K_sub:
In [47]:
# THIS HANGS!

# we could try the following, which seems unpromising given that we already
# know what vec space we should be dealing with and should be able to derive
# the correct field instead of supplying it; notice that the result is not
# even satisfactory, returning Complexes instead of obvious Reals
# temp_self.element.deduce_in_vec_space(field=None)

In [48]:
# try this instead:
# does not work if we omit the automation=True
domain_is_vec_space = temp_self.domain.deduce_as_vec_space(automation=True)

domain_is_vec_space:
In [49]:
# followed by this:
_K_sub = VecSpaces.known_field(temp_self.domain)

_K_sub:
In [50]:
vec_spaces = temp_self.domain.operands

vec_spaces:
In [51]:
from proveit import a, i, K, V
tensor_prod_is_in_tensor_prod_space.instantiate(
{a: _a_sub, i: _i_sub, K: _K_sub, V: vec_spaces})

, ,  ⊢
In [52]:
InSet(x_y_z, TensorProd(R3, R3, R3)).conclude()

, ,  ⊢
In [53]:
InSet(x_y_z, TensorProd(C3, C3, C3)).conclude()

, ,  ⊢

### The TensorProd.association() and TensorProd.disassociation() methods¶

In [54]:
help(TensorProd.association)

Help on function association in module proveit.linear_algebra.tensors.tensor_prod:

association(self, start_idx, length, *, field=None, **defaults_config)
Given vector operands, or all CartExp operands, deduce that
this expression is equal to a form in which operands in the
range [start_idx, start_idx+length) are grouped together.
For example, calling
(a ⊗ b ⊗ ... ⊗ y ⊗ z).association(l, m-l+1)
would return
|- (a ⊗ b ⊗ ... ⊗ y ⊗ z) =
(a ⊗ b ... ⊗ (l ⊗ ... ⊗ m) ⊗ ... ⊗ y ⊗ z)
Or calling (R3 ⊗ R3 ⊗ R3).associate(1, 2) would return
|- (R3 ⊗ R3 ⊗ R3) = (R3 ⊗ (R3 ⊗ R3))

For this to work in the vectors case, the vector operands must
be known to be in vector spaces of a common field.  If the
field is not specified, then VecSpaces.default_field is used.
For this to work in the case of CartExp operands, all operands
must be (recursively) CartExps and each must be known to be
a vector space.

Keyword arguments are accepted for temporarily changing any
of the attributes of proveit.defaults.

'associated' returns the right-hand side of 'association'.
'associate', called on an InnerExpr of a Judgment,
substitutes the right-hand side of 'association' for
the inner expression.


In [55]:
help(TensorProd.disassociation)

Help on function disassociation in module proveit.linear_algebra.tensors.tensor_prod:

disassociation(self, idx, *, field=None, **defaults_config)
Given vector operands, or all CartExp operands, deduce that
this expression is equal to a form in which operand at index
idx is no longer grouped together.
For example, calling
(a ⊗ b ⊗ ... (l ⊗ ... ⊗ m) ... ⊗ y ⊗ z).disassociation(l-1)
would return
|- (a ⊗ b ⊗ ... (l ⊗ ... ⊗ m) ... ⊗ y ⊗ z) =
(a ⊗ b ⊗ ... ⊗ l ⊗ ... ⊗ m ⊗ ... ⊗ y ⊗ z)
Or calling (R3 ⊗ (R3 ⊗ R3)).disassociate(1) would return
|- (R3 ⊗ (R3 ⊗ R3)) = (R3 ⊗ R3 ⊗ R3)

For this to work in the vectors case, the vector operands must
be known to be in vector spaces of a common field.  If the
field is not specified, then VecSpaces.default_field is used.
For this to work in the case of CartExp operands, all operands
must be (recursively) CartExps and each must be known to be
a vector space.

Keyword arguments are accepted for temporarily changing any
of the attributes of proveit.defaults.

'disassociated' returns the right-hand side of 'disassociation'.
'disassociate', called on an InnerExpr of a Judgment,
substitutes the right-hand side of 'disassociation' for
the inner expression.


In [56]:
# without the automation=True, gives error
# with automation=True, just HANGS
# tensor_prod_with_sum_04.association(1, 4, automation=True)

In [57]:
tensor_prod_04.disassociation(1, auto_simplify=False)

, , , , , ,  ⊢

### The TensorProd.scalar_factorization() method¶

In [58]:
help(TensorProd.scalar_factorization)

Help on function scalar_factorization in module proveit.linear_algebra.tensors.tensor_prod:

scalar_factorization(self, idx=None, *, field=None, **defaults_config)
Prove the factorization of a scalar from one of the tensor
product operands and return the original tensor product equal
to the factored version.  If idx is provided, it will specify
the (0-based) index location of the ScalarMult operand with the
multiplier to factor out.  If no idx is provided, the first
ScalarMult operand will be targeted.

For example,
TensorProd(a, ScalarMult(c, b), d).scalar_factorization(1)
returns
|- TensorProd(a, ScalarMult(c, b), d) =
c TensorProd(a, b, d)

As a prerequisite, the operands must be known to be vectors in
vector spaces over a common field which contains the scalar
multiplier being factored.  If the field is not specified,
then VecSpaces.default_field is used.

Keyword arguments are accepted for temporarily changing any
of the attributes of proveit.defaults.

'scalar_factorized' returns the right-hand side of 'scalar_factorization'.
'factor_scalar', called on an InnerExpr of a Judgment,
substitutes the right-hand side of 'scalar_factorization' for
the inner expression.


In [59]:
tensor_prod_00

In [60]:
tensor_prod_00.scalar_factorization(0, field=Real)

, ,  ⊢

By default, the first ScalarMult operand is the target. We may also rely on the default field, VecSpaces.default_field.
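The identity being proven, x ⊗ (a·y) = a·(x ⊗ y), can be sanity-checked numerically with a plain-Python Kronecker product. This is an illustrative sketch, not Prove-It code; `kron` and `scale` are hypothetical helpers.

```python
# Plain-Python check of the scalar-factorization identity
# x (x) (a*y) = a * (x (x) y) at the level of coordinates.
def kron(u, v):
    return [p * q for p in u for q in v]

def scale(a, u):
    return [a * p for p in u]

a = 2.5
x, y = [1.0, 2.0], [3.0, 4.0, 5.0]
print(kron(x, scale(a, y)) == scale(a, kron(x, y)))  # -> True
```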

In [61]:
VecSpaces.default_field = Real

VecSpaces.default_field:
In [62]:
tensor_prod_000.scalar_factorization()

, ,  ⊢
In [63]:
tensor_prod_01.scalar_factorization()

, , ,  ⊢
In [64]:
tensor_prod_02.scalar_factorization(1)

, , ,  ⊢
In [65]:
tensor_prod_03.scalar_factorization()

, , ,  ⊢
In [66]:
tensor_prod_05.scalar_factorization()

, , , , , ,  ⊢

### The TensorProd.distribution() method¶

In [67]:
help(TensorProd.distribution)

Help on function distribution in module proveit.linear_algebra.tensors.tensor_prod:

distribution(self, idx, *, field=None, **defaults_config)
Given a TensorProd operand at the (0-based) index location
'idx' that is a vector sum or summation, prove the distribution
over that TensorProd factor and return an equality to the
original TensorProd. For example, we could take the TensorProd
tens_prod = TensorProd(a, b+c, d)
and call tens_prod.distribution(1) to obtain:
|- TensorProd(a, b+c, d) =
TensorProd(a, b, d) + TensorProd(a, c, d)

Keyword arguments are accepted for temporarily changing any
of the attributes of proveit.defaults.

'distributed' returns the right-hand side of 'distribution'.
'distribute', called on an InnerExpr of a Judgment,
substitutes the right-hand side of 'distribution' for
the inner expression.


In [68]:
tensor_prod_05

In [69]:
tensor_prod_05.distribution(2)

, , , , , ,  ⊢
In [70]:
tensor_prod_06

In [71]:
tensor_prod_06.distribution(0)

, ,  ⊢
In [72]:
tensor_prod_07

In [73]:
tensor_prod_07.distribution(1)

, ,  ⊢
In [74]:
tensor_prod_08

In [75]:
tensor_prod_08.distribution(2)

, , , , ,  ⊢
In [76]:
tensor_prod_with_sum_01

In [77]:
tensor_prod_with_sum_01.distribution(1)

, ,  ⊢
In [78]:
temp_self = tensor_prod_with_sum_01

temp_self:
In [79]:
field=None
_V_sub = VecSpaces.known_vec_space(temp_self, field=field)

_V_sub:
In [80]:
# notice here that _V_sub is a TensorProd, not a VecSpace
type(_V_sub)

proveit.linear_algebra.tensors.tensor_prod.TensorProd
In [81]:
_K_sub = VecSpaces.known_field(_V_sub)

_K_sub:
In [82]:
idx = 1
sum_factor = temp_self.operands[idx]

sum_factor:
In [83]:
_a_sub = temp_self.operands[:idx]

_a_sub:
In [84]:
_c_sub = temp_self.operands[idx+1:]

_c_sub:
In [85]:
_i_sub = _a_sub.num_elements()

_i_sub:
In [86]:
_k_sub = _c_sub.num_elements()

_k_sub:
In [87]:
_b_sub = sum_factor.indices

_b_sub:
In [88]:
_j_sub = _b_sub.num_elements()

_j_sub:
In [89]:
from proveit import Lambda
_f_sub = Lambda(sum_factor.indices, sum_factor.summand)

_f_sub:
In [90]:
_Q_sub = Lambda(sum_factor.indices, sum_factor.condition)

_Q_sub:
In [91]:
from proveit import K, f, Q, i, j, k, V, a, b, c
from proveit.linear_algebra.tensors import tensor_prod_distribution_over_summation
impl = tensor_prod_distribution_over_summation.instantiate(
{K:_K_sub, f:_f_sub, Q:_Q_sub, i:_i_sub, j:_j_sub, k:_k_sub,
V:_V_sub, a:_a_sub, b:_b_sub, c:_c_sub},
preserve_all=True)

impl:
In [92]:
tensor_prod_with_sum_02

In [93]:
tensor_prod_with_sum_02.distribution(0)

In [94]:
tensor_prod_with_sum_03

In [95]:
tensor_prod_with_sum_03.distribution(1)

In [96]:
tensor_prod_with_sum_04

In [97]:
tensor_prod_with_sum_04.distribution(4)

, , , , ,  ⊢
In [98]:
# recall one of our TensorProd objects without a sum or summation:
tensor_prod_02

In [99]:
# we should get a meaningful error message when trying to distribute across
# a factor that is not a sum or summation:
try:
    tensor_prod_02.distribution(1)
    assert False, "Expecting a ValueError; should not get this far!"
except ValueError as the_error:
    print("ValueError: {}".format(the_error))

ValueError: Don't know how to distribute tensor product over <class 'proveit.linear_algebra.scalar_multiplication.scalar_mult.ScalarMult'> factor


Working on details for a possible TensorProd.factoring() method below.

In [100]:
tensor_prod_inside_sum_01


That is actually equal to:
\begin{align} x \otimes f(2) + x \otimes f(3) + x \otimes f(4) &= x \otimes (f(2) + f(3) + f(4))\\ &= x \otimes \sum_{i=2}^{4} f(i) \end{align} We want to be able to call tensor_prod_inside_sum_01.factorization(idx), where idx indicates the (0-based) index of the tensor product term to leave inside the summation. Clearly any term(s) that are functions of the summation index itself cannot be pulled out, and we cannot change the order of the tensor product terms. In the simple example here, we would call tensor_prod_inside_sum_01.factorization(1), to obtain \begin{align} \vdash \sum_{i=2}^{4} \left[x \otimes f(i)\right] &= x \otimes \sum_{i=2}^{4} f(i) \end{align}

In [101]:
from proveit.linear_algebra.tensors import tensor_prod_distribution_over_summation
tensor_prod_distribution_over_summation

In [102]:
self_test = tensor_prod_inside_sum_01

self_test:
In [103]:
idx = 1

idx: 1
In [104]:
field = None
_V_sub = VecSpaces.known_vec_space(self_test, field=field)

_V_sub:
In [105]:
_K_sub = VecSpaces.known_field(_V_sub)

_K_sub:
In [106]:
_a_sub = self_test.summand.operands[:idx]

_a_sub:
In [107]:
_c_sub = self_test.summand.operands[idx+1:]

_c_sub:
In [108]:
_i_sub = _a_sub.num_elements()

_i_sub:
In [109]:
_k_sub = _c_sub.num_elements()

_k_sub:
In [110]:
_b_sub = self_test.indices

_b_sub:
In [111]:
_j_sub = _b_sub.num_elements()

_j_sub:
In [112]:
_f_sub = Lambda(self_test.indices, self_test.summand.operands[idx])

_f_sub:
In [113]:
_Q_sub = Lambda(self_test.indices, self_test.condition)

_Q_sub:
In [114]:
tensor_prod_distribution_over_summation.instance_params

In [115]:
from proveit import K, f, Q, i, j, k, V, a, b, c
impl = tensor_prod_distribution_over_summation.instantiate(
{K:_K_sub, f:_f_sub, Q:_Q_sub, i:_i_sub, j:_j_sub, k:_k_sub,
V:_V_sub, a:_a_sub, b:_b_sub, c:_c_sub}, preserve_all=True)

impl:
In [116]:
impl.derive_consequent().derive_reversed().with_wrapping_at()


The tensor_prod_factoring() method is technically a method on a VecSum object, but it is worth briefly illustrating here in the tensors context, since it essentially performs the inverse of the TensorProd.distribution() method:
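At the coordinate level, the distribution that this factoring inverts is just bilinearity of the Kronecker product, which can be checked in plain Python (an illustrative sketch with hypothetical helpers, not Prove-It code):

```python
# Plain-Python check of distribution over a vector sum:
# x (x) (y + z) = (x (x) y) + (x (x) z).
def kron(u, v):
    return [p * q for p in u for q in v]

def vec_add(u, v):
    return [p + q for p, q in zip(u, v)]

x, y, z = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
lhs = kron(x, vec_add(y, z))
rhs = vec_add(kron(x, y), kron(x, z))
print(lhs == rhs)  # -> True
```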

In [117]:
# The numeric argument indicates the tensor product factor
# to LEAVE inside the VecSum
tensor_prod_inside_sum_01.tensor_prod_factoring(1)

In [118]:
tensor_prod_inside_sum_03

In [119]:
defaults.assumptions

In [120]:
try:
    tensor_prod_inside_sum_03.tensor_prod_factoring(1)
except Exception as the_exception:
    print("Exception: {}".format(the_exception))


Testing (and problems) for the factoring of tensor_prod_inside_sum_03, which is identical to tensor_prod_inside_sum_01 except that we now have z instead of x, and z has all the same properties as x ...

In [121]:
self_test = tensor_prod_inside_sum_03

self_test:
In [122]:
field = None
_V_sub = VecSpaces.known_vec_space(self_test, field=field)

_V_sub:
In [123]:
_K_sub = VecSpaces.known_field(_V_sub)

_K_sub:
In [124]:
_a_sub = self_test.summand.operands[:idx]

_a_sub:
In [125]:
_c_sub = self_test.summand.operands[idx+1:]

_c_sub:
In [126]:
_i_sub = _a_sub.num_elements()

_i_sub:
In [127]:
_k_sub = _c_sub.num_elements()

_k_sub:
In [128]:
_b_sub = self_test.indices

_b_sub:
In [129]:
_j_sub = _b_sub.num_elements()

_j_sub:
In [130]:
_f_sub = Lambda(self_test.indices, self_test.summand.operands[idx])

_f_sub:
In [131]:
_Q_sub = Lambda(self_test.indices, self_test.condition)

_Q_sub:
In [132]:
from proveit import K, f, Q, i, j, k, V, a, b, c
impl = tensor_prod_distribution_over_summation.instantiate(
{K:_K_sub, f:_f_sub, Q:_Q_sub, i:_i_sub, j:_j_sub, k:_k_sub,
V:_V_sub, a:_a_sub, b:_b_sub, c:_c_sub}, preserve_all=True)

impl:
In [133]:
defaults.assumptions

In [134]:
InSet(z, R3).proven()

True
In [135]:
InSet(VecSum(i, Function(f, i), domain=Interval(two, four)), R3).proven()

True
In [136]:
TensorProd(z, VecSum(i, Function(f, i), domain=Interval(two, four))).deduce_in_vec_space(TensorProd(R3, R3), field=Real)

In [137]:
InSet(TensorProd(z, VecSum(i, Function(f, i), domain=Interval(two, four))), TensorProd(R3, R3)).prove()

In [138]:
impl.derive_consequent()

In [139]:
impl.derive_consequent().derive_reversed().with_wrapping_at()

In [140]:
tensor_prod_inside_sum_02.tensor_prod_factoring(1)

In [141]:
tensor_prod_inside_sum_03.tensor_prod_factoring(1)

In [142]:
tensor_prod_inside_sum_04.tensor_prod_factoring(0)

In [143]:
tensor_prod_inside_sum_05.tensor_prod_factoring(1)

, ,  ⊢
In [144]:
tensor_prod_inside_sum_06.tensor_prod_factoring(2)

, , ,  ⊢
In [145]:
scalar_mult_example = ScalarMult(alpha,tensor_prod_inside_sum_06)

scalar_mult_example:
In [146]:
scalar_mult_example.inner_expr().operands[1].tensor_prod_factoring(2)

, , ,  ⊢
In [147]:
scalar_mult_example_02 = VecSum(i, TensorProd(ScalarMult(beta, x), ScalarMult(beta, y)), domain=Interval(two, four))

scalar_mult_example_02:
In [148]:
scalar_mult_example_03 = VecSum(i, ScalarMult(gamma, ScalarMult(beta, TensorProd(x, fi, y))), domain=Interval(two, four))

scalar_mult_example_03:
In [149]:
scalar_mult_example_04 = VecSum(i, ScalarMult(gamma, ScalarMult(i, TensorProd(x, y))), domain=Interval(two, four))

scalar_mult_example_04:
In [150]:
from proveit.numbers import Add
scalar_mult_example_05 = VecSum(
i,
ScalarMult(gamma, ScalarMult(i, ScalarMult(beta, ScalarMult(Add(i, one), TensorProd(x, y))))),
domain=Interval(two, four))

scalar_mult_example_05:
In [151]:
defaults.assumptions

In [152]:
from proveit.logic import InClass
from proveit.linear_algebra import VecSpaces

In [153]:
InClass(x, VecSpaces(Real))

In [154]:
defaults.assumptions + (InClass(fi, VecSpaces(Real)), InClass(x, VecSpaces(Real)))

In [155]:
# scalar_factorization() factors out just the first single scalar it
# finds, which is somewhat problematic: it produces a ScalarMult at the
# top level, making it difficult to then repeat the process
scalar_mult_example_02_factored_01 = scalar_mult_example_02.summand.scalar_factorization()

scalar_mult_example_02_factored_01: , ,  ⊢
In [156]:
scalar_mult_example_02.summand.simplification()

, ,  ⊢
In [157]:
scalar_mult_example_02_factored_01

, ,  ⊢
In [158]:
# we would then have to dig into the next level
# scalar_mult_example_02_factored_01.inner_expr().rhs.operands[1].factor_scalar()

In [159]:
scalar_mult_example_02_factored_01

, ,  ⊢
In [160]:
# shallow simplify is working, pulling the scalars out front
# and eliminating nested ScalarMults
scalar_mult_example_02_factored_02 = scalar_mult_example_02_factored_01.inner_expr().rhs.operands[1].shallow_simplify()

scalar_mult_example_02_factored_02: , ,  ⊢
In [161]:
# double_scaling_reduce() seems to work now
# scalar_mult_example_02_factored_02.inner_expr().rhs.double_scaling_reduce()

In [162]:
scalar_mult_example_02.summand.shallow_simplification(
assumptions=defaults.assumptions + (InClass(fi, VecSpaces(Real)), InClass(x, VecSpaces(Real))))

, ,  ⊢
In [163]:
simplification_01 = scalar_mult_example_02.inner_expr().summand.shallow_simplification()

simplification_01: , ,  ⊢
In [164]:
scalar_mult_example_02.tensor_prod_factoring(idx=1)

, ,  ⊢
In [165]:
test_result = scalar_mult_example_02.factors_extraction()

test_result: , ,  ⊢

#### Testing the instantiation of the distribution_over_vec_sum theorem.¶

In [166]:
from proveit.linear_algebra.scalar_multiplication import distribution_over_vec_sum
distribution_over_vec_sum

In [167]:
temp_self = scalar_mult_example_02

temp_self:
In [168]:
from proveit import TransRelUpdater
expr = temp_self
eq = TransRelUpdater(expr)
eq.relation

In [169]:
expr = eq.update(expr.inner_expr().summand.shallow_simplification())
eq.relation

, ,  ⊢
In [170]:
expr

In [171]:
# this might be tricky: will known_vec_space return the same for every
# a_1, ..., a_i ?
_V_sub = VecSpaces.known_vec_space(expr.summand.scaled, field=None)

_V_sub:
In [172]:
_K_sub = VecSpaces.known_field(_V_sub)

_K_sub:
In [173]:
_b_sub = expr.indices

_b_sub:
In [174]:
_j_sub = _b_sub.num_elements()

_j_sub:
In [175]:
# from proveit import Lambda
_f_sub = Lambda(expr.indices, expr.summand.scaled)

_f_sub:
In [176]:
_Q_sub = Lambda(expr.indices, expr.condition)

_Q_sub:
In [177]:
_k_sub = expr.summand.scalar

_k_sub:
In [178]:
from proveit import V, K, a, b, f, j, k, Q
imp = distribution_over_vec_sum.instantiate(
{V: _V_sub, K: _K_sub, b: _b_sub, j: _j_sub,
f: _f_sub, Q: _Q_sub, k: _k_sub})

imp:
In [179]:
imp.derive_consequent().derive_reversed()

, ,  ⊢
In [180]:
expr = eq.update(imp.derive_consequent().derive_reversed())

expr:
In [181]:
eq.relation

, ,  ⊢
In [182]:
expr

In [183]:
expr.inner_expr().scaled.tensor_prod_factoring(1)

In [184]:
expr = eq.update(expr.inner_expr().scaled.tensor_prod_factoring(1))

expr:
In [185]:
eq.relation

, ,  ⊢
In [186]:
scalar_mult_example_03

In [187]:
# scalar_mult_example_03.factors_extraction()

In [188]:
# consider this example now, noting the factor of index var 'i'
scalar_mult_example_04

In [189]:
# this seems to work ok when called literally like this:
scalar_mult_example_04.inner_expr().summand.shallow_simplification()

, ,  ⊢
In [190]:
# but what if we try to use the TransRelUpdater?
expr = scalar_mult_example_04 # a VecSum with a ScalarMult summand
eq = TransRelUpdater(scalar_mult_example_04)
display(eq.relation)
expr = eq.update(expr.inner_expr().summand.shallow_simplification())
display(eq.relation)

, ,  ⊢
In [191]:
# this will not yet work, because the i factor causes it to
# completely fail; want to check if the ScalarMult.scalar is itself a Mult,
# then check and pull out what we can from that
# scalar_mult_example_04.factors_extraction(assumptions=defaults.assumptions + (InSet(i, Real),))


(1) If the summand is a ScalarMult, we then check to see if the ScalarMult.scalar involves the summand index.

(2) If not, factor it out! If it does, we might have to look more carefully ...
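The rule above can be sketched numerically in plain Python (an illustrative sketch with hypothetical helpers, not Prove-It code): a scalar that does not involve the summation index factors out of the sum, while an index-dependent scalar scales each term differently.

```python
# Index-independent scalars factor out of a vector sum:
# sum_i [gamma * v(i)] = gamma * [sum_i v(i)] for i = 2..4.
def scale(a, u):
    return [a * p for p in u]

def vec_add(*vecs):
    return [sum(comps) for comps in zip(*vecs)]

v = {i: [float(i), 1.0] for i in range(2, 5)}
gamma = 3.0

lhs = vec_add(*[scale(gamma, v[i]) for i in range(2, 5)])
rhs = scale(gamma, vec_add(*[v[i] for i in range(2, 5)]))
print(lhs == rhs)  # -> True

# An index-dependent scalar cannot be factored the same way:
# each term of sum_i [i * v(i)] is scaled by a different i.
lhs2 = vec_add(*[scale(i, v[i]) for i in range(2, 5)])
```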

### WORKING HERE¶

In [192]:
scalar_mult_example

In [193]:
scalar_mult_example_01 = VecSum(i, ScalarMult(gamma, TensorProd(x, y, fi, z)), domain=Interval(two, four))

scalar_mult_example_01:

Here we try to manually re-create VecSum.deduce_in_vec_space() for the case where the summand is a ScalarMult of a TensorProd.

#### Figuring out the instantiation substitutions needed for the VecSum of a ScalarMult … and figuring out some extra steps to deal with the vec space issues …¶

In [194]:
# this process works for a single vector left behind in the sum
# but not when we associate and then try
expr = scalar_mult_example_01
field = None
idx = 2 # just a single idx right now
idx_beg = 1
idx_end = 2
from proveit import TransRelUpdater
eq = TransRelUpdater(expr)
eq.relation

In [195]:
# associate the chosen elements to remain
if idx_beg != idx_end:
    expr = eq.update(expr.inner_expr().summand.scaled.association(
        idx_beg, idx_end - idx_beg + 1))
    idx = idx_beg
display(expr)
display(idx)

1

The following cell seems important, and may need to be implemented in some way in the VecSum.tensor_prod_factoring() method for certain circumstances; we need to talk with WW about this, though. This VecSpaces.known_vec_space() call succeeds where the one in the next cell continued to be problematic.

In [196]:
# Now trying the process without this and with just the one above.
# Put in some extra steps to help with the vector field stuff?
# This is based on analogous steps in VecSum.deduce_in_vec_space(),
# with some modifications for a ScalarMult summand.
self = expr
with defaults.temporary() as tmp_defaults:
    tmp_defaults.assumptions = (defaults.assumptions + self.conditions.entries)
    vec_space = VecSpaces.known_vec_space(self.summand, field=field)  # or on scaled?
    _V_sub = VecSpaces.known_vec_space(self.summand, field=field)
display(vec_space)

In [197]:
# _V_sub = VecSpaces.known_vec_space(expr, field=field)
# display(expr)
# _V_sub = VecSpaces.known_vec_space(expr.summand)
# VecSpaces.known_vec_space(expr, field=Real)

In [198]:
_K_sub = VecSpaces.known_field(_V_sub)

_K_sub:
In [199]:
_b_sub = expr.indices

_b_sub:
In [200]:
_j_sub = _b_sub.num_elements()

_j_sub:
In [201]:
_Q_sub = Lambda(expr.indices, expr.condition)

_Q_sub:
In [202]:
_a_sub = expr.summand.scaled.operands[:idx]

_a_sub:
In [203]:
_c_sub = expr.summand.scaled.operands[idx+1:]

_c_sub:
In [204]:
_s_sub = Lambda(expr.indices, expr.summand.scalar)

_s_sub:
In [205]:
_f_sub = Lambda(expr.indices, expr.summand.scaled.operands[idx] )

_f_sub:
In [206]:
_i_sub = _a_sub.num_elements()

_i_sub:
In [207]:
_k_sub = _c_sub.num_elements()

_k_sub:
In [208]:
from proveit.linear_algebra.tensors import tensor_prod_distribution_over_summation_with_scalar_mult
tensor_prod_distribution_over_summation_with_scalar_mult

In [209]:
from proveit import K, f, Q, i, j, k, V, a, b, c, s
impl = tensor_prod_distribution_over_summation_with_scalar_mult.instantiate(
{K:_K_sub, f:_f_sub, Q:_Q_sub, i:_i_sub, j:_j_sub,
k:_k_sub, V:_V_sub, a:_a_sub, b:_b_sub, c:_c_sub,
s: _s_sub})

impl: , , , ,  ⊢
In [210]:
impl.derive_consequent().derive_reversed()

, , , ,  ⊢

### Examples of tensor_prod_factoring()¶

In [211]:
scalar_mult_example_01.tensor_prod_factoring(idx=2)

, , , ,  ⊢
In [212]:
scalar_mult_example_01.tensor_prod_factoring(idx_beg=1, idx_end=2)

, , , ,  ⊢
In [213]:
scalar_mult_example_01.tensor_prod_factoring(idx_beg=1, idx_end=3)

, , , ,  ⊢
In [214]:
scalar_mult_example_01.tensor_prod_factoring(idx_beg=0, idx_end=2)

, , , ,  ⊢
In [215]:
scalar_mult_example_02.tensor_prod_factoring(idx=0)

, ,  ⊢
In [216]:
scalar_mult_example_02.tensor_prod_factoring(idx=1)

, ,  ⊢
In [217]:
# with no arguments, we factor as much as possible and reduce if possible
scalar_mult_example_02.tensor_prod_factoring()

, ,  ⊢
In [218]:
scalar_mult_example_03

In [219]:
# the tensor_prod_factoring() will also work
# if the ScalarMults are nested (at least to some degree)
scalar_mult_example_03.tensor_prod_factoring(idx=1)

, , , ,  ⊢
In [220]:
scalar_mult_example_03.inner_expr().tensor_prod_factoring(idx_beg=1, idx_end=2)

, , , ,  ⊢
In [221]:
type(scalar_mult_example_03.summand.scaled.scaled.operands.num_elements())

proveit.numbers.numerals.numeral.Numeral
In [222]:
temp_result = scalar_mult_example_03.inner_expr().factors_extraction(
assumptions = defaults.assumptions + scalar_mult_example_03.conditions.entries)

temp_result: , , , ,  ⊢

### Testing TensorProd.association() for tensor products of sets¶

In [223]:
# Some example TensorProds of CartExps
example_vec_space_01, example_vec_space_02, example_vec_space_03, example_vec_space_04, example_vec_space_05 = (
TensorProd(R3, R3, R3, R3), TensorProd(R3, TensorProd(R3, R3)), TensorProd(x, R3),
TensorProd(TensorProd(C3, TensorProd(C3, C3)), C3, C3),
TensorProd(TensorProd(R3, TensorProd(C3, y)), R3, C3))

example_vec_space_01:
example_vec_space_02:
example_vec_space_03:
example_vec_space_04:
example_vec_space_05:
In [224]:
example_vec_space_01_assoc = example_vec_space_01.association(1, 2)

example_vec_space_01_assoc:
In [225]:
# For this to work, we need to establish ahead of time that one of the
# operands is indeed a vector space
example_vec_space_04.operands[0].deduce_as_vec_space()
example_vec_space_04_assoc = example_vec_space_04.association(1, 2)

example_vec_space_04_assoc:
In [226]:
# This will not work, because we have a mixture of vectors and CartExps.
# The error message comes about because Prove-It is trying to treat R^3
# as a vector instead of a vector space
try:
    example_vec_space_05_assoc = example_vec_space_05.association(1, 2)
except Exception as the_exception:
    print("Exception: {}".format(the_exception))

Exception: Prerequisites not met while assuming (u in Real^{3}, v in Real^{3}, w in Real^{3}, x in Real^{3}, y in Real^{3}, z in Real^{3}, x != 0(Real^{3}), z != 0(Real^{3}), a in Real, alpha in Complex, beta in Real, gamma in Real, delta in Real, forall_{i in Natural} (f(i) in Real^{3})): Real^{3} is not known to be in a vector space over Real
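The associativity that `TensorProd.association()` (and, in the next section, `disassociation()`) expresses at the level of proofs can be checked numerically for concrete vectors. This is an illustrative sketch with hypothetical values, not the Prove-It API:

```python
# Illustrative numeric check: the Kronecker/tensor product is
# associative, so regrouping operands does not change the result.
def kron(u, v):
    return [ui * vj for ui in u for vj in v]

x, y, z = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
assert kron(kron(x, y), z) == kron(x, kron(y, z))
```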


### Testing TensorProd.disassociation() for tensor products of sets¶

Notice that TensorProd.disassociation() will produce a complete disassociation if auto-simplification is allowed to proceed unmodified, so here we first set the simplification directive 'ungroup' to False (and then reset it back to True at the end of this section).

In [227]:
TensorProd.change_simplification_directives(ungroup=False)

In [228]:
example_vec_space_01_assoc.rhs.disassociation(1)

In [229]:
example_vec_space_04_assoc.rhs.disassociation(1)

In [230]:
example_vec_space_04_assoc.rhs.disassociation(0)

In [231]:
example_vec_space_04_assoc.rhs.inner_expr().operands[0].disassociation(1)

In [232]:
TensorProd(TensorProd(x, TensorProd(y, z)), TensorProd(x, y)).disassociation(1)

, ,  ⊢
In [233]:
# Reset to allow default auto_simplify() results
# (and notice what then happens in the next cell)
TensorProd.change_simplification_directives(ungroup=True)

In [234]:
# Now the disassociation() produces a complete disassociation
# despite our intention to only disassociate a piece
TensorProd(TensorProd(x, TensorProd(y, z)), TensorProd(x, y)).disassociation(1)

, ,  ⊢

### Examples and Testing of VecSum.factors_extraction()¶

In [235]:
# here the collection of scalars includes an index-dependent factor
scalar_mult_example_04.factors_extraction()

, ,  ⊢
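The core identity behind `factors_extraction()` — an index-independent scalar factor distributes out of a vector sum — can be verified numerically. This is an illustrative plain-Python sketch with a hypothetical `f` and values, not the Prove-It API:

```python
# Illustrative numeric check: sum_i (a * f(i)) = a * (sum_i f(i)).
def scale(a, u):
    return [a * ui for ui in u]

def vec_add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

a = 2.0  # index-independent scalar
f = lambda i: [float(i), float(2 * i)]  # hypothetical vector-valued summand

terms = [f(i) for i in range(1, 4)]
lhs = [0.0, 0.0]
for t in terms:
    lhs = vec_add(lhs, scale(a, t))
rhs_sum = [0.0, 0.0]
for t in terms:
    rhs_sum = vec_add(rhs_sum, t)
assert lhs == scale(a, rhs_sum)
```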
In [236]:
defaults.assumptions

In [237]:
# Sometimes this works and sometimes it doesn't. It is unclear how it
# can be non-deterministic; is there something non-deterministic about
# the vector_spaces derivation?
# Here, all the scalar factors can be extracted:
display(scalar_mult_example_03)
scalar_mult_example_03.factors_extraction(assumptions = defaults.assumptions + scalar_mult_example_03.conditions.entries)

, , , ,  ⊢
In [238]:
# In this case, one factor can be extracted, another cannot:
display(scalar_mult_example_04)
scalar_mult_example_04.factors_extraction()

, ,  ⊢
In [239]:
# None of the factors are index-dependent
scalar_mult_example_02.factors_extraction()

, ,  ⊢
In [240]:
# Just one vector of several is index-dependent
# while none of the scalars are index-dependent
display(scalar_mult_example_03)
scalar_mult_example_03.factors_extraction()

, , , ,  ⊢
In [241]:
# What if the scalar in the ScalarMult is a single item?
display(scalar_mult_example_01)
scalar_mult_example_01.factors_extraction()

, , , ,  ⊢
In [242]:
# What if some scalars are index-dependent
# while none of the vectors themselves are?
display(scalar_mult_example_04)
scalar_mult_example_04.factors_extraction()

, ,  ⊢
In [243]:
# multiple factors that can and cannot be extracted
display(scalar_mult_example_05)
scalar_mult_example_05.factors_extraction(field=None)

, , ,  ⊢

### Testing the Add.factorization() method¶

In [244]:
from proveit.numbers import Add, Mult

In [245]:
# Not surprisingly, the Add.factorization() method doesn't work
# if the desired factor is buried too deeply in one of the expressions
try:
    # attempted call reconstructed from the error message below
    Add(Mult(alpha, Mult(gamma, beta)), beta).factorization(beta)
except ValueError as the_exception:
    print("ValueError: {}".format(the_exception))

ValueError: beta not found as a direct factor of alpha * (gamma * beta)

In [246]:
add_02 = Add(Add(Mult(alpha, beta), beta), Mult(beta, gamma))

In [247]:
add_02.inner_expr().operands[0].factorization(beta)
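The inner-expression factorization above corresponds to an ordinary arithmetic identity, checked here with hypothetical concrete values:

```python
# Illustrative arithmetic check: factoring beta out of the inner sum of
# add_02 = (alpha*beta + beta) + beta*gamma gives beta*(alpha + 1).
alpha, beta, gamma = 2.0, 3.0, 5.0
assert alpha * beta + beta == beta * (alpha + 1.0)
assert (alpha * beta + beta) + beta * gamma == beta * (alpha + 1.0) + beta * gamma
```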


### Testing VecAdd.factorization() method¶

In [248]:
vec_add_example_01 = VecAdd(ScalarMult(Mult(alpha, beta), x), ScalarMult(Mult(gamma, alpha), y))

In [249]:
defaults.assumptions

In [250]:
ScalarMult(alpha, ScalarMult(beta, x)).shallow_simplification()

, ,  ⊢
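Numerically, nested scalar multiplications collapse to a single combined scalar, which is what `shallow_simplification()` establishes symbolically. This sketch uses hypothetical concrete values and is not the Prove-It API:

```python
# Illustrative numeric check: alpha * (beta * x) = (alpha*beta) * x.
def scale(a, u):
    return [a * ui for ui in u]

alpha, beta = 2.0, 3.0
x = [1.0, 4.0]
assert scale(alpha, scale(beta, x)) == scale(alpha * beta, x)
```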
In [251]:
vec_add_example_01

In [252]:
vec_add_example_01.factorization(alpha, field=Complex)

, , , ,  ⊢
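The factorization just derived can be sanity-checked numerically: the common scalar `alpha` comes out of both terms of the vector sum. This is an illustrative sketch with hypothetical concrete values, not the Prove-It API:

```python
# Illustrative numeric check:
# (alpha*beta) x + (gamma*alpha) y = alpha * (beta x + gamma y).
def scale(a, u):
    return [a * ui for ui in u]

def vec_add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

alpha, beta, gamma = 2.0, 3.0, 5.0
x, y = [1.0, 2.0], [4.0, 7.0]
lhs = vec_add(scale(alpha * beta, x), scale(gamma * alpha, y))
rhs = scale(alpha, vec_add(scale(beta, x), scale(gamma, y)))
assert lhs == rhs
```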

### The TensorProd.remove_vec_on_both_sides_of_equals() and TensorProd.insert_vec_on_both_sides_of_equals() methods¶

In [253]:
help(TensorProd.remove_vec_on_both_sides_of_equals)

Help on function remove_vec_on_both_sides_of_equals in module proveit.linear_algebra.tensors.tensor_prod:

remove_vec_on_both_sides_of_equals(tensor_equality, idx, rhs_idx=None, *, field=None, **defaults_config)
From an equality with tensor products of vectors on
both sides, derive a similar equality but with the vector
operand removed at the particular given zero-based index (idx).
A different index may be specified for the right side as the
left side by setting rhs_idx (i.e., if entries don't line up
due to differences of ExprRange entries), but the default will
be to use the same.

Keyword arguments are accepted for temporarily changing any
of the attributes of proveit.defaults.


In [254]:
help(TensorProd.insert_vec_on_both_sides_of_equals)

Help on function insert_vec_on_both_sides_of_equals in module proveit.linear_algebra.tensors.tensor_prod:

insert_vec_on_both_sides_of_equals(tensor_equality, idx, vec, rhs_idx=None, *, field=None, **defaults_config)
From an equality with tensor products of vectors on
both sides, derive a similar equality but with a vector
operand inserted at the particular given zero-based index (idx).
A different index may be specified for the right side as the
left side by setting rhs_idx (i.e., if entries don't line up
due to differences of ExprRange entries), but the default will
be to use the same.

Keyword arguments are accepted for temporarily changing any
of the attributes of proveit.defaults.


In [255]:
tp_01, tp_02 = (TensorProd(x, y, z), TensorProd(x, u, z))

tp_01:
tp_02:
In [256]:
equality_01 = Equals(tp_01, tp_02)

equality_01:
In [257]:
defaults.assumptions += (equality_01,)


We can access TensorProd.remove_vec_on_both_sides_of_equals() and TensorProd.insert_vec_on_both_sides_of_equals() via Equals.remove_vec_on_both_sides and Equals.insert_vec_on_both_sides, respectively. These methods are generated on the fly by finding methods with the _on_both_sides_of_equals suffix associated with the Equals expression; the docstrings are set to be the same:
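The suffix-based method generation can be sketched generically. The classes and names below are hypothetical stand-ins, not Prove-It's actual implementation; they only illustrate how an equality object might expose `*_on_both_sides` methods by looking up `*_on_both_sides_of_equals` methods on an operand's class:

```python
# Hypothetical sketch of on-the-fly method dispatch (not Prove-It code).
class TensorProdLike:
    @staticmethod
    def remove_vec_on_both_sides_of_equals(tensor_equality, idx):
        # placeholder standing in for the real derivation
        return ("removed", idx)

class EqualsLike:
    def __init__(self, operand_class):
        self.operand_class = operand_class

    def __getattr__(self, name):
        # map equality.foo_on_both_sides to
        # OperandClass.foo_on_both_sides_of_equals(equality, ...)
        target = name + "_of_equals"
        if name.endswith("_on_both_sides") and hasattr(self.operand_class, target):
            method = getattr(self.operand_class, target)
            return lambda *args, **kwargs: method(self, *args, **kwargs)
        raise AttributeError(name)

equality = EqualsLike(TensorProdLike)
assert equality.remove_vec_on_both_sides(0) == ("removed", 0)
```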

In [258]:
help(equality_01.remove_vec_on_both_sides)

Help on function remove_vec_on_both_sides_of_equals in module proveit.linear_algebra.tensors.tensor_prod:

remove_vec_on_both_sides_of_equals(tensor_equality, idx, rhs_idx=None, *, field=None, **defaults_config)
From an equality with tensor products of vectors on
both sides, derive a similar equality but with the vector
operand removed at the particular given zero-based index (idx).
A different index may be specified for the right side as the
left side by setting rhs_idx (i.e., if entries don't line up
due to differences of ExprRange entries), but the default will
be to use the same.

Keyword arguments are accepted for temporarily changing any
of the attributes of proveit.defaults.


In [259]:
equality_02 = equality_01.remove_vec_on_both_sides(0)

equality_02: , , , , ,  ⊢
In [260]:
equality_03 = equality_02.remove_vec_on_both_sides(1)

equality_03: , , , , , ,  ⊢

The example above also demonstrates the auto-simplification of unary tensor products.
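Removing a vector from both sides is sound only when that vector is nonzero (hence the x ≠ 0 and z ≠ 0 assumptions set up at the start). The reason can be seen numerically: a nonzero common factor can be divided back out of the tensor product. This is an illustrative sketch with hypothetical values, not the Prove-It API:

```python
# Illustrative numeric check: a nonzero vector factor of a tensor
# product can be inverted away, which is what justifies cancellation.
def kron(u, v):
    return [ui * vj for ui in u for vj in v]

x, y = [1.0, 2.0], [3.0, 4.0]
prod = kron(x, y)
# Divide the first block of the product by the first nonzero entry of x
# to recover y; this inversion step is what requires x != 0.
recovered = [p / x[0] for p in prod[:len(y)]]
assert recovered == y
```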

Now let's insert vectors into the tensor products. In the special case where we don't start with a tensor product on both sides, we must call the TensorProd.insert_vec_on_both_sides_of_equals method directly:

In [261]:
equality_04 = TensorProd.insert_vec_on_both_sides_of_equals(
equality_03, 0, z)

equality_04: , , , , , ,  ⊢

In the future, we could have CartExp and other vector space expression classes implement left_tensor_prod_both_sides_of_equals (and right_tensor_prod_both_sides_of_equals), so that the above could be implemented via equality.left_tensor_prod_both_sides(z) instead.

Now that we have tensor products on both sides, we can call insert_vec_on_both_sides on the equality.

In [262]:
equality_05 = equality_04.insert_vec_on_both_sides(2, x)

equality_05: , , , , , ,  ⊢

Operating on an ExprTuple whose single entry is an ExprRange that represents a finite list of elements, the ExprTuple.range_expansion() method converts self to an ExprTuple with a finite listing of explicit arguments.

For example, letting $\text{expr_tuple_01} = (x_1,\ldots,x_3)$ and then calling expr_tuple_01.range_expansion() deduces and returns

$\vdash ((x_1,\ldots,x_3) = (x_1, x_2, x_3))$

The reason for including this in the demonstrations notebook for the linear_algebra package is that the method can be used to convert something like

$x_1 \otimes \ldots \otimes x_3$

to the more explicit

$x_1 \otimes x_2 \otimes x_3$
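What range_expansion() accomplishes can be sketched with plain strings; the helper below is hypothetical and only mimics turning a symbolic range of indexed entries into an explicit tuple:

```python
# Illustrative sketch (not Prove-It API): expand a symbolic range
# base_start, ..., base_end into an explicit tuple of indexed names.
def range_expansion(base, start, end):
    return tuple("{}_{}".format(base, i) for i in range(start, end + 1))

assert range_expansion("x", 1, 3) == ("x_1", "x_2", "x_3")
```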

In [263]:
# create an example ExprRange
example_expr_range = ExprRange(i, IndexedVar(x, i), one, three)

example_expr_range:
In [264]:
# use the example ExprRange as the arg(s) for a TensorProd:
example_tensor_prod_over_range = TensorProd(example_expr_range)

example_tensor_prod_over_range:
In [265]:
# find the ExprTuple equivalent to the (ExprTuple-wrapped) ExprRange:
expr_range_eq_expr_tuple = ExprTuple(example_expr_range).range_expansion()

expr_range_eq_expr_tuple:
In [266]:
# find the equivalent explicit tensor prod
example_tensor_prod_over_range.inner_expr().operands.substitution(
expr_range_eq_expr_tuple)

In [267]:
# find the equivalent explicit tensor prod,
# and omit the wrapping inherited from the underlying theorem
example_tensor_prod_over_range.inner_expr().operands.substitution(
expr_range_eq_expr_tuple).with_wrapping_at()

In [268]:
%end demonstrations