Short answer:
- For the Cartesian power Cn(G), the chromatic number is trivial: χ(Cn(G)) = χ(G) for all n (Sabidussi). For certain G (complete graphs Kq, and in particular K2), α(Cn(G)) also has a closed form and can be computed in time polynomial in n.
- For the strong power Sn(G), computing α(Sn(G)) is exactly the “finite-n” version of the Shannon-capacity problem. Even for very small graphs this is notoriously hard; there is no general efficient algorithm known, nor are there proven exponential-time lower bounds in n. The best general tools are bounds (e.g., Lovász’s theta, Haemers’ bound), not exact computation.
- For the tensor (categorical) power Tn(G), χ(Tn(G)) is complicated in general (Hedetniemi’s conjecture is false). For some special G it is easy; in general, no efficient algorithm is known, and no exponential lower bounds in n are known either.
More detail and concrete facts:
- Cartesian powers
- Chromatic number: χ(G ☐ H) = max{χ(G), χ(H)} (Sabidussi), hence χ(Cn(G)) = χ(G).
- Independence number:
- For G = Kq (the q-ary Hamming graph with distance 1 edges), α(Cn(Kq)) = q^{n−1}. In particular, for G = K2 (the n-cube), α(Qn) = 2^{n−1}. These are classical in coding theory (maximum size of a code with minimum Hamming distance ≥ 2).
- For general G, there is no general closed form. Exact computation can be difficult, but there is no universal “must be exponential in n” lower bound known; for many fixed G there are transfer-matrix or spectral methods that yield algorithms whose complexity may be exponential in parameters of G but only polynomial in n.
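To make the Cartesian-power claims concrete, here is a minimal brute-force sketch (helper names are mine, and the exhaustive search is only viable for tiny graphs): it builds C2(K3), the 3×3 rook's graph, and confirms χ = χ(K3) = 3 and α = 3^{2−1} = 3.

```python
from itertools import product, combinations

def cartesian_power(edges, vertices, n):
    """Vertices are n-tuples; u ~ v iff they differ in exactly one
    coordinate, and that pair of entries is an edge of the factor."""
    V = list(product(vertices, repeat=n))
    E = set()
    for u in V:
        for i in range(n):
            for w in vertices:
                if frozenset((u[i], w)) in edges:
                    E.add(frozenset((u, u[:i] + (w,) + u[i + 1:])))
    return V, E

def alpha(V, E):
    """Independence number by exhaustive search (tiny graphs only)."""
    for k in range(len(V), 0, -1):
        for S in combinations(V, k):
            if all(frozenset(p) not in E for p in combinations(S, 2)):
                return k
    return 0

def chi(V, E):
    """Chromatic number by exhaustive search over all colourings."""
    for k in range(1, len(V) + 1):
        for colours in product(range(k), repeat=len(V)):
            c = dict(zip(V, colours))
            if all(c[u] != c[v] for u, v in map(tuple, E)):
                return k

K3 = {frozenset(p) for p in combinations(range(3), 2)}
V, E = cartesian_power(K3, range(3), 2)
print(alpha(V, E), chi(V, E))  # → 3 3
```

The same functions work for any small fixed G, though both searches are exponential in the number of vertices of the power graph.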
- Strong powers
- Independence number: α(Sn(G)) = α(G ⊠ … ⊠ G) is the central quantity of zero-error information theory: the asymptotic growth rate lim_{n→∞} α(Sn(G))^{1/n} is the Shannon capacity Θ(G). Even for small graphs G the exact values are highly nontrivial; already α(S2(C5)) = 5 > α(C5)^2 = 4 (Shannon, 1956), and Θ(C5) = √5 was only determined by Lovász (1979). The best general approach is to sandwich α(Sn(G)) between the multiplicative upper bound from Lovász's theta, α(Sn(G)) ≤ θ(G)^n, and the trivial lower bound α(Sn(G)) ≥ α(G)^n.
- Complexity: With G as part of the input, exact computation is NP-hard already at n = 1 (it’s just α(G)). If you fix G and let n vary, there is no general polynomial-time algorithm known for exact α(Sn(G)); nor are there proven lower bounds that it “requires exponential time in n.” This is a long-standing open area rather than a settled complexity classification.
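The C5 phenomenon can be checked exhaustively (a sketch; function names are mine): α(S2(C5)) = 5, strictly larger than α(C5)^2 = 4, and equal to θ(C5)^2 = (√5)^2 = 5, so the theta bound is tight at n = 2.

```python
from itertools import product, combinations

C5 = {frozenset((i, (i + 1) % 5)) for i in range(5)}

def strong_adjacent(u, v, edges):
    """u ~ v in the strong product iff u != v and, in every
    coordinate, the entries are equal or adjacent in the factor."""
    return u != v and all(a == b or frozenset((a, b)) in edges
                          for a, b in zip(u, v))

V = list(product(range(5), repeat=2))  # vertices of C5 ⊠ C5

def has_independent_set(k):
    """Exhaustive check for an independent set of size k (slow)."""
    return any(all(not strong_adjacent(u, v, C5)
                   for u, v in combinations(S, 2))
               for S in combinations(V, k))

# {(i, 2i mod 5)} is independent of size 5; no size-6 set exists.
print(has_independent_set(5), has_independent_set(6))  # → True False
```

Already at n = 2 the search space is C(25, 6) = 177,100 candidate sets, which illustrates why exhaustive methods collapse quickly as n grows.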
- Tensor (categorical) powers
- Chromatic number: χ(G × H) ≤ min{χ(G), χ(H)} (there are homomorphisms to each factor). Hedetniemi’s conjecture (which claimed equality) is false in general (Shitov, 2019). For many natural classes (e.g., complete graphs, bipartite graphs, and several other families), χ(Tn(G)) = χ(G) for all n; for general graphs, the sequence χ(Tn(G)) can drop with n and the exact value is in general hard to determine. No general polynomial-time algorithm is known, and no exponential-in-n time lower bounds are known.
- Independence number:
- For G = Kq, α(Tn(Kq)) = q^{n−1}. This is a classical “intersecting family” extremal result in Hamming space: an independent set in Kq^{×n} is a family of q-ary words of length n any two of which agree in at least one coordinate, and the maximum is achieved by fixing one coordinate to one symbol (an Erdős–Ko–Rado-type result, proved for q ≥ 3 by Greenwell and Lovász).
- For G = K2 specifically, Tn(K2) is a perfect matching on 2^n vertices, so α(Tn(K2)) = 2^{n−1} and χ(Tn(K2)) = 2.
- For general G, α(Tn(G)) has no simple closed form in general.
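These tensor-power values are small enough to confirm exhaustively (a sketch; helper names are mine): α(T2(K3)) = 3^{2−1} = 3, and T3(K2) is a perfect matching on 8 vertices (every vertex has degree 1) with α = 2^{3−1} = 4.

```python
from itertools import product, combinations

def tensor_power(edges, vertices, n):
    """u ~ v iff the entries are adjacent in G in *every* coordinate."""
    V = list(product(vertices, repeat=n))
    E = {frozenset((u, v)) for u, v in combinations(V, 2)
         if all(frozenset((a, b)) in edges for a, b in zip(u, v))}
    return V, E

def alpha(V, E):
    """Independence number by exhaustive search (tiny graphs only)."""
    for k in range(len(V), 0, -1):
        if any(all(frozenset(p) not in E for p in combinations(S, 2))
               for S in combinations(V, k)):
            return k
    return 0

K3 = {frozenset(p) for p in combinations(range(3), 2)}
K2 = {frozenset((0, 1))}

V3, E3 = tensor_power(K3, range(3), 2)   # T2(K3)
V2, E2 = tensor_power(K2, range(2), 3)   # T3(K2): a perfect matching
degrees = {sum(v in e for e in E2) for v in V2}
print(alpha(V3, E3), alpha(V2, E2), degrees)  # → 3 4 {1}
```

Note how the adjacency rule differs from the strong product only in requiring adjacency (rather than adjacency-or-equality) in each coordinate.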
- Strong products of complete graphs and other easy special cases
- For G = Kq:
- Strong power: Sn(Kq) = K_{q^n}, so χ(Sn(Kq)) = q^n and α(Sn(Kq)) = 1.
- Cartesian power: as above, χ(Cn(Kq)) = q and α(Cn(Kq)) = q^{n−1}.
- Tensor power: χ(Tn(Kq)) = q and α(Tn(Kq)) = q^{n−1}.
So for such special graphs, all six quantities can be computed in time polynomial in n (they are given by simple formulas).
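The strong-power formula for complete graphs is likewise immediate to sanity-check (a sketch): S2(K3) is K9, i.e., every pair of the q^n = 9 vertex tuples is adjacent, since in each coordinate the entries are either equal or an edge of Kq.

```python
from itertools import product, combinations

q, n = 3, 2
Kq = {frozenset(p) for p in combinations(range(q), 2)}

V = list(product(range(q), repeat=n))
E = {frozenset((u, v)) for u, v in combinations(V, 2)
     if all(a == b or frozenset((a, b)) in Kq for a, b in zip(u, v))}

# Every pair of distinct tuples is adjacent, so Sn(Kq) = K_{q^n}:
print(len(E) == len(V) * (len(V) - 1) // 2, len(V))  # → True 9
```

Hence χ(S2(K3)) = 9 and α(S2(K3)) = 1, as the formulas state.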
- Complexity “exponential in n”?
- If you allow G to vary, the problems are NP-hard already for n = 1 (both α and χ are NP-hard), so we cannot expect efficient algorithms in full generality.
- If you fix G and let n vary:
- There are families where all quantities are easy (e.g., G = Kq).
- For strong powers, exact α(Sn(G)) is tied to Shannon-capacity-type questions; no general polynomial-time algorithm is known, but there is also no established lower bound proving exponential time in n is necessary.
- For tensor powers, χ(Tn(G)) in general remains complicated post-Hedetniemi; again, no general polynomial-time algorithm is known, and no exponential-time lower bound in n is known.
- If one broadens the representation to general succinctly described graphs, exact α and χ become NEXP-hard (Papadimitriou–Yannakakis, 1986). However, the specific representations “G powered by a fixed graph product” are too special to inherit that hardness automatically; no such general lower bound is known for these power-graph families.
References and pointers:
- G. Sabidussi, Graph multiplication, Math. Z. 72 (1959), 446–457. [χ(G ☐ H) = max{χ(G), χ(H)}]
- C. E. Shannon, The zero-error capacity of a noisy channel, IRE Trans. Information Theory (1956).
- L. Lovász, On the Shannon capacity of a graph, IEEE Trans. IT-25 (1979), 1–7. [θ(G), multiplicativity under strong product]
- Y. Shitov, Counterexamples to Hedetniemi’s conjecture, Ann. of Math. 190 (2019), 663–667.
- J. H. van Lint, Introduction to Coding Theory, for Hamming graphs and q-ary codes (e.g., α(H(n,q)) = q^{n−1}).
- C. H. Papadimitriou and M. Yannakakis, A note on succinct representations of graphs, Information and Control 71 (1986), 181–185. [succinct hardness background]
Summary:
- χ(Cn(G)) is easy: equals χ(G).
- For special G (complete graphs, K2), α and χ across all three products have simple formulas computable in poly(n).
- For strong powers (α) and tensor powers (χ) in general, exact computation is closely tied to major open problems; there are no general efficient algorithms known and no proven exponential-in-n lower bounds for these specific families.