Strictly speaking they aren't a "part of calculus" - they aren't directly related to continuous rates of change - but that's a pretty arbitrary distinction and both are almost always taught as part of a calculus course.
For many people, integration is defined as "the area under the curve" and computed using a set of known rules. Infinite series are a tool we use to reason about calculus concepts more formally. It's a very arbitrary distinction; I was only making it because if you aren't studying calculus in depth you can avoid talking about Taylor polynomials and infinite series, but you can't really avoid talking about derivatives and integrals. If you want to define integrals formally, then you will definitely see infinite series.
A good definition for the Riemann integral is: the one number which is both a lower bound of the set of all upper sums (areas with circumscribed rectangles) and an upper bound of the set of all lower sums (inscribed rectangles).
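In symbols (standard Darboux-style notation, just my own summary of that definition): for a bounded $f$ on $[a,b]$ and a partition $P = \{a = x_0 < x_1 < \dots < x_n = b\}$, set

$$U(f,P) = \sum_{i=1}^{n} \Big(\sup_{[x_{i-1},\,x_i]} f\Big)\,(x_i - x_{i-1}), \qquad L(f,P) = \sum_{i=1}^{n} \Big(\inf_{[x_{i-1},\,x_i]} f\Big)\,(x_i - x_{i-1}).$$

Then $f$ is integrable exactly when there is a single number $I$ with $L(f,P) \le I \le U(f,Q)$ for every pair of partitions $P, Q$, and that unique $I$ is $\int_a^b f$.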
That definition has a lot of nice properties which are useful in proofs, whereas the limit-of-sums definition suffers from there being too many different ways to make "delta x goes to 0" precise, all of which have to be shown equivalent at some point before the definition is useful (nasty!).
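For contrast, the "limit of sums" definition people usually have in mind is something like this (again just a sketch of the usual statement):

$$\int_a^b f = \lim_{\|P\| \to 0} \sum_{i=1}^{n} f(t_i)\,(x_i - x_{i-1}), \qquad t_i \in [x_{i-1}, x_i],$$

where $\|P\| = \max_i (x_i - x_{i-1})$, and the limit has to come out the same no matter how the partitions and sample points $t_i$ are chosen; that "no matter how" is exactly the part that's a pain to pin down.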
It's usually not really defined as the limit of a series in analysis texts, but as the simultaneous greatest lower bound of the upper sums and least upper bound of the lower sums (areas of circumscribed and inscribed rectangles, respectively). That definition is a lot easier to deal with in proofs and just as intuitive to many.