Probabilistic Latent Semantic Indexing (pLSI) is a fundamental method for the analysis of text and related resources, based on a simple statistical model. Thanks to this simplicity, the method is highly extensible and scalable. pLSI can also be viewed as a matrix factorization method, like Singular Value Decomposition (SVD) or Non-negative Matrix Factorization (NMF). Like SVD, pLSI decomposes the data into three matrices, one of which is diagonal. In SVD the diagonal elements of this matrix are the singular values, but it is not entirely clear what the diagonal matrix of pLSI represents, nor whether the diagonalization constraint is necessary at all. This question is the starting point of this paper. To answer it, we demonstrate that introducing off-diagonal elements into the singular-value matrix of pLSI is equivalent to permitting joint probabilities between different hidden variables. This extension sacrifices neither the scalability nor the simplicity of pLSI, and our experiments demonstrate that it is more tolerant of over-learning and over-fitting.
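As an illustrative sketch of this correspondence, under the assumption that the paper uses the standard symmetric parameterization of pLSI (the matrix symbols $U$, $S$, and $V$ below are our notation for illustration, not necessarily the paper's):

\[
P(d_i, w_j) = \sum_{k} P(z_k)\, P(d_i \mid z_k)\, P(w_j \mid z_k)
\quad\Longleftrightarrow\quad
\mathbf{P} = \mathbf{U}\,\boldsymbol{\Sigma}\,\mathbf{V}^{\top},
\]
where $U_{ik} = P(d_i \mid z_k)$, $\Sigma = \operatorname{diag}\big(P(z_1), \ldots, P(z_K)\big)$, and $V_{jk} = P(w_j \mid z_k)$. Dropping the diagonality constraint replaces $\Sigma$ with a full matrix $S$ with $S_{kl} = P(z_k, z_l)$, so that
\[
P(d_i, w_j) = \sum_{k}\sum_{l} P(d_i \mid z_k)\, P(z_k, z_l)\, P(w_j \mid z_l)
\quad\Longleftrightarrow\quad
\mathbf{P} = \mathbf{U}\,\mathbf{S}\,\mathbf{V}^{\top}.
\]
The diagonal case is recovered when $P(z_k, z_l) = 0$ for all $k \neq l$, i.e., when joint probabilities between different hidden variables are forbidden.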