Introduces put-call parity as identified by Hans Stoll in 1969, along with Python code for computing the parity relationship both numerically and symbolically.
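As a rough illustration of the parity relationship the article covers, here is a minimal Python sketch. The function name and the numeric inputs are illustrative, not taken from the article; the relation assumed is the standard European, no-dividend form \(C + Ke^{-rt} = P + S\).

```python
from math import exp

def put_from_parity(call, spot, strike, rate, t):
    """Put price implied by European put-call parity,
    C + K * exp(-r * t) = P + S (no dividends)."""
    return call + strike * exp(-rate * t) - spot

# Illustrative values only: C = 10, S = 100, K = 100, r = 5%, t = 1 year
put = put_from_parity(10.0, 100.0, 100.0, 0.05, 1.0)
```

Given the call price, the parity pins down the put price with no model assumptions beyond no-arbitrage.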
Articles by Aaron Schlegel
Download 45,000 Adoptable Cat Images in 6.5 Minutes with petpy and multiprocessing
Using petpy and multiprocessing to download 45,000 cat images in 6.5 minutes
Introduction to petpy
Introduction to using the petpy Python library for interacting with the Petfinder API.
Combined Linear Congruential Generator for Pseudo-random Number Generation
Combined linear congruential generators, as the name implies, are a type of PRNG (pseudorandom number generator) that combine two or more LCGs (linear congruential generators). The combination of two or more LCGs into one random number generator can result in a marked increase in the period length of the generator which makes them better suited for simulating more complex systems.
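A minimal Python sketch of the combining idea, using the classic Wichmann-Hill constants (three small multiplicative generators whose normalized outputs are summed modulo 1); the article's own implementation and parameters may differ.

```python
def wichmann_hill(seed1, seed2, seed3, n):
    """Combined generator: three small multiplicative LCGs whose
    normalized outputs are summed modulo 1 (Wichmann-Hill constants)."""
    out = []
    s1, s2, s3 = seed1, seed2, seed3
    for _ in range(n):
        s1 = (171 * s1) % 30269
        s2 = (172 * s2) % 30307
        s3 = (170 * s3) % 30323
        out.append((s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0)
    return out
```

The combined period is roughly the product of the three component periods, far longer than any single small LCG achieves on its own.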
Multiplicative Congruential Random Number Generators with R
Multiplicative congruential generators, also known as Lehmer random number generators, are a type of linear congruential generator for generating uniform pseudorandom numbers. The multiplicative congruential generator, often abbreviated as MLCG or MCG, is defined by a recurrence relation similar to that of the LCG.
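The recurrence can be sketched in a few lines of Python (the article itself uses R); the multiplier \(a = 7^5 = 16807\) and modulus \(m = 2^{31} - 1\) shown here are the well-known "minimal standard" choices, not necessarily those used in the article.

```python
def lehmer(seed, n, a=16807, m=2**31 - 1):
    """Lehmer / multiplicative congruential generator:
    x_{k+1} = a * x_k mod m, normalized to (0, 1)."""
    xs = []
    x = seed
    for _ in range(n):
        x = (a * x) % m
        xs.append(x / m)
    return xs
```

Note there is no additive increment: the seed must be nonzero, or the sequence collapses to zero.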
Linear Congruential Generator for Pseudo-random Number Generation with R
Linear congruential generators (LCGs) are a class of pseudorandom number generator (PRNG) algorithms used for generating sequences of random-like numbers. The generation of random numbers plays a large role in many applications ranging from cryptography to Monte Carlo methods. Linear congruential generators are one of the oldest and most well-known methods for generating random numbers, primarily due to their comparative ease of implementation, their speed, and their small memory requirements.
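The full recurrence \(x_{k+1} = (a x_k + c) \bmod m\) is short enough to sketch directly in Python (the article works in R); the constants below are the widely cited Numerical Recipes parameters, used here purely for illustration.

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator:
    x_{k+1} = (a * x_k + c) mod m, normalized to [0, 1)."""
    xs = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        xs.append(x / m)
    return xs
```

With a well-chosen multiplier, increment, and modulus, the generator attains its full period of m before repeating.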
Set Union and Intersections with R
The set operations 'union' and 'intersection' should ring a bell for those who've worked with relational databases and Venn diagrams. The 'union' of two sets A and B is the set comprising all elements that are members of A, of B, or of both.
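The two operations can be demonstrated in a couple of lines, shown here with Python's built-in sets rather than the article's R code:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}

union = a | b          # every element in A, in B, or in both
intersection = a & b   # only the elements common to A and B
```

The intersection keeps just the shared members, while the union merges everything without duplicates.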
Introduction to Sets and Set Theory with R
Sets are 'collections' of objects, typically referred to as 'elements' or 'members.' The concept of a set arises naturally when dealing with any collection of objects, whether it be a group of numbers or anything else.
Hierarchical Clustering Nearest Neighbors Algorithm in R
Hierarchical clustering is a widely used and popular tool in statistics and data mining for grouping observations into clusters based on their similarity.
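As a rough illustration of the nearest-neighbor (single-linkage) idea, here is a naive Python sketch (the article itself works in R); the function name, the 1-D inputs, and the brute-force merge loop are purely illustrative.

```python
def single_linkage(points, k):
    """Naive single-linkage agglomerative clustering on 1-D points:
    repeatedly merge the two clusters whose closest members are
    nearest to each other, until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between nearest members
                d = min(abs(p - q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

groups = single_linkage([1.0, 1.2, 5.0, 5.1, 9.0], 3)
```

Each pass merges the closest pair of clusters, which is exactly the nearest-neighbor criterion; other linkage rules differ only in how the between-cluster distance is defined.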
Factor Analysis with the Iterated Factor Method and R
The iterated principal factor method is an extension of the principal factor method that repeats the estimation of the communalities until the estimates converge.
Factor Analysis with Principal Factor Method and R
As discussed in a previous post on the principal component method of factor analysis, the \(\hat{\Psi}\) term in the estimated covariance matrix \(S\), \(S = \hat{\Lambda} \hat{\Lambda}' + \hat{\Psi}\), was excluded and we proceeded directly to factoring \(S\) and \(R\). The principal factor method of factor analysis (also called the principal axis method) finds an initial estimate of \(\hat{\Psi}\) and factors \(S - \hat{\Psi}\), or \(R - \hat{\Psi}\) for the correlation matrix.
Factor Analysis with the Principal Component Method and R Part Two
In the first post on factor analysis, we examined computing the estimated covariance matrix \(S\) of the rootstock data and proceeded to find two factors that fit most of the variance of the data. However, the variables in the data are not on the same scale of measurement, which can cause variables with comparatively large variances to dominate the diagonal of the covariance matrix and the resulting factors. The correlation matrix, therefore, makes more intuitive sense to employ in factor analysis.
Factor Analysis with the Principal Component Method and R
The goal of factor analysis, similar to principal component analysis, is to reduce the original variables into a smaller number of factors that allows for easier interpretation. PCA and factor analysis nevertheless differ in several respects. One difference is that principal components are defined as linear combinations of the variables, while factors are defined as linear combinations of the underlying latent variables.
Image Compression with Principal Component Analysis
Image compression with principal component analysis is a useful and instructive application of the dimension reduction technique.
Principal Component Analysis with R Example
Often, it is not helpful or informative to only look at all the variables in a dataset for correlations or covariances. A preferable approach is to derive new variables from the original variables that preserve most of the information given by their variances. Principal component analysis is a widely used and popular statistical method for reducing data with many dimensions (variables) by projecting the data with fewer dimensions using linear combinations of the variables, known as principal components.
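The projection idea can be sketched for the two-variable case in plain Python, using the closed-form eigendecomposition of a 2x2 covariance matrix; the function name and data are illustrative and this is not the article's R code.

```python
from math import sqrt

def first_principal_component(data):
    """First principal component of 2-D data via the closed-form
    eigendecomposition of the 2x2 sample covariance matrix."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    # sample covariance entries
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    # larger eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = (sxx + syy) / 2 + sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    # corresponding eigenvector (b, lam - a), normalized
    v = (sxy, lam - sxx) if sxy != 0 else (1.0, 0.0)
    norm = sqrt(v[0] ** 2 + v[1] ** 2)
    return lam, (v[0] / norm, v[1] / norm)
```

The eigenvalue gives the variance captured by the component, and the unit eigenvector gives the direction of the projection.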
Image Compression with Singular Value Decomposition
The method of image compression with singular value decomposition relies on the fact that a low-rank approximation of the image matrix, built from its largest singular values, retains most of the visual information.
Singular Value Decomposition and R Example
Singular value decomposition (SVD) underpins many statistical methods and real-world applications, from least-squares regression to image compression.
Cholesky Decomposition with R Example
Cholesky decomposition, also known as Cholesky factorization, is a method of decomposing a symmetric, positive-definite matrix into the product of a lower triangular matrix and its transpose.
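A pure-Python sketch of the standard Cholesky-Banachiewicz recurrence (the article's own example is in R):

```python
from math import sqrt

def cholesky(a):
    """Cholesky factorization A = L L^T of a symmetric
    positive-definite matrix, returning lower-triangular L."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # diagonal entries come from a square root
                L[i][j] = sqrt(a[i][i] - s)
            else:
                # off-diagonal entries divide by the pivot
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L
```

Multiplying the returned L by its transpose reproduces the original matrix, which is a quick way to check the factorization.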
How to Calculate the Inverse Matrix for 2×2 and 3×3 Matrices
The inverse of a number is its reciprocal. For example, the inverse of 8 is 1/8, since 8 multiplied by 1/8 equals 1.
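For the 2x2 case, the adjugate formula the title refers to is short enough to sketch directly in Python (the function name is illustrative):

```python
def inverse_2x2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate:
    (1 / det) * [[d, -b], [-c, a]], where det = a*d - b*c."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]
```

A zero determinant means the matrix has no inverse, mirroring the fact that 0 has no reciprocal.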
Categories
- Analysis
- Calculus
- Data Science
- Finance
- Linear Algebra
- Machine Learning
- nasapy
- petpy
- poetpy
- Python
- R
- SQL
- Statistics