Polynomials for dummies

Reddit For Dummies

2013.08.15 03:51 ddshroom Reddit For Dummies

Reddit For Dummies!!! Where redditors teach redditors how this thing works. Feel free to post anything that helps.


2022.06.19 06:13 GrowingForDummies

A safe growing environment from beginners to experts. No judgment on here, as we all had to start somewhere. Show your grows and share your knowledge!


2017.07.22 20:05 CnC_Connoisseurs Where Average Dabbers Get Answers, Informational Videos

This area is for Beginner to Moderate Recreational Marijuana and Cannabis Extract Users focused mainly on Dabbing. Commercial Extractors, Everyday Blasters, and Dudes that “Put Out Just Fkn Fire!!” everyday, may not like it here, just ride your high horse over to another / to discuss 50 gram slabs, T.H.C Isolate, and pure terpene extraction, CHEERS!!


2024.03.14 16:37 sketchyeh Pre-Calculus -- is it possible with little foundational skills?

Hello friends,
Long story short, I've been taking a foundational math course to prep for a required pre-calc course in my university (I'm Canadian, if that matters). This course is your standard four months, but my pre-calc class is in the spring semester, which is only a month long.
Unfortunately, due to circumstances outside of my control regarding my housing and safety, I pretty much missed an entire month of school and fell aggressively behind. I've been trying to catch up, but I am in tears every time I try to do the work because I just can't make the concepts click. I have ADHD and it makes remembering the steps or what certain numbers mean a challenge (for me personally), and it gets very very difficult for me to push through. At this point, I'm debating withdrawing from the class.
My question, then, is this: is it possible to get a 60% (a C-) in a pre-calc class when I have this much of a struggle learning things like polynomials, or understanding when something is/isn't prime, square roots, factoring, etc.?
My rationale is: I have the textbook for both courses, "pre calculus for dummies" and instead of juggling 4 courses over 4 months, I'd be focusing solely on pre-calc for one month. In theory, I should be able to look up things as I come across them; but I don't know.
Please be kind. I love science, I want to love math, I like to understand it, and I usually have a great attitude about learning. I'm just feeling stuck.
submitted by sketchyeh to learnmath


2024.02.03 07:04 a_bayesian New 28 year coach RAPM, evidence coaches influence defense more than players, and triple-adjusted player RAPM follow-up

These three related topics all come from recent Jeremias Engelmann tweets about his work with RAPM. For those unfamiliar, RAPM is a plus-minus stat adjusted for everyone on the court, and it is the most predictive player stat that doesn't use any box score data. Engelmann has worked in the analytics departments of multiple NBA teams, is one of the original pioneers of RAPM, and invented RPM.

28 year Coach RAPM

This non-traditional version of RAPM treats coaches like a 6th man on the court to estimate their impact, and is rubber-band and player age adjusted (more on those adjustments later).
As Engelmann points out in his tweet, it should be taken with a grain of salt. The largest issue is something called collinearity, which in this case basically means that because the coach is always considered on the court, there is limited "off-court" data to compare against (essentially only when their players play for other coaches), which makes it tougher for the regression to tease apart player and coach effects. This is especially bad for coaches with fewer seasons or fewer players, so those results should be interpreted with extra caution.
That being said, I think this is a very interesting stat, especially since coach impact can be quite difficult to judge any other way.
Here are the top 20 coaches from 1997 to 2024 in RAPM:
# Coach Offense Defense Total
1 Phil Jackson 2.4 -4.2 6.6
2 Mike Fratello 0.1 -4.7 4.8
3 Jeff Van Gundy -0.3 -4.8 4.5
4 Tom Thibodeau 0.4 -4.1 4.4
5 Ime Udoka 0.8 -3.4 4.2
6 Scott Skiles 0.3 -3.9 4.2
7 Larry Bird 0.5 -3.2 3.8
8 Billy Donovan 1.5 -2.2 3.7
9 Michael Malone 1.5 -2.1 3.6
10 Steve Kerr 1.8 -1.8 3.5
11 Mike Budenholzer 0.8 -2.6 3.4
12 Stan Van Gundy 0.2 -3.1 3.3
13 Doc Rivers 0.6 -2.6 3.2
14 Brad Stevens 0.7 -2.4 3.1
15 Mike Brown 0.4 -2.7 3.1
16 Jim O'Brien 0.1 -3 3.1
17 Doug Collins 0.4 -2.5 2.9
18 Gregg Popovich 0.5 -2.3 2.7
19 George Karl 1.4 -1.3 2.7
20 Hubie Brown 1.2 -1.5 2.7
Here's a link to the full spreadsheet: https://docs.google.com/spreadsheets/d/1aCMwq4qu_m1-Tzc63CmRxQ2H9qq5vFUMpZViJanPxfw/
Interesting to see Jackson so far ahead. The top 3 active coach list of Thibodeau, Udoka, and Donovan(!) is surprising. Popovich (18th of 182) and Spoelstra (25th) aren't as high up as expected. Billups ranks 182nd out of 182 at -6.1, with the next lowest active coach being Jacque Vaughn at -1.5.

Evidence that coaches influence defense more than players

The above data shows that coaches are having a large effect on defense, much more than offense. I'll quote Engelmann's tweets for the next part:
tldr:
Coaches, if treated as a 6th man on the court, have more influence on the defensive performance of a team, than players do.
Knowing who the coach is, will lead to better prediction results of a 5-man unit's defensive performance, than knowing who the 5 players are
Technical details:
In penalized regression we determine certain factors ("alphas") that indicate the level of noise/information for certain variables. Lower values mean less noise: we will allow more movement for that variable's coefficient.
For players, these values have hovered around 3k, usually slightly lower on off.
We can determine these factors for every "variable group" {players off, players def, coach off, coach def} separately, using crossvalidation to determine which set of alphas leads to best predictions
This led to the following alphas:
Players Off: 2000
Players Def: 8000
Coaches Off: 8000
Coaches Def: 2000
Issues with this approach:
  • it's not crediting the coach for playing the best players (only for influencing those that do play)
  • age influence is assumed to be the same for everyone (could work around this but need more RAM)
  • a coach is assumed to "perform the same" no matter how many years the data points are apart
From that explanation, the huge data set, the size of the effect, and Engelmann's reputation, this is a very strong/robust result. I've always thought coaches had a big impact on defense but for it to be significantly more than players (in the NBA over these years) is surprising. I also think it's interesting that coaches appear to have more influence on defense by the same amount that players have more influence on offense.
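To build intuition for what those alphas do, here is a toy 1-D ridge sketch (my own illustration, not Engelmann's actual model): with a single centered predictor, the penalized coefficient has a closed form, and a larger alpha shrinks the estimate harder toward zero.

```rust
// 1-D ridge: with a single centered predictor the penalized coefficient is
// beta(alpha) = sum(x*y) / (sum(x^2) + alpha), so a larger alpha allows
// less "movement" for the coefficient, exactly as described in the tweets.
fn ridge_1d(x: &[f64], y: &[f64], alpha: f64) -> f64 {
    let sxy: f64 = x.iter().zip(y).map(|(a, b)| a * b).sum();
    let sxx: f64 = x.iter().map(|a| a * a).sum();
    sxy / (sxx + alpha)
}
```

On the same made-up data, an alpha of 2000 (the "players offense" level) leaves the coefficient noticeably larger than an alpha of 8000 (the "players defense" level), which is the sense in which the regression trusts player offense data more.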

Triple-adjusted 28 year player RAPM

This part of the post is a follow-up to the unadjusted 28 year RAPM that was posted here 3 days ago. That post also has a good explanation of how to interpret player RAPM numbers, basically like a much better version of on/off: https://www.reddit.com/nbadiscussion/comments/1aextmx/new_lifetime_rapm_rankings_1997_includes_playoffs/
This time around there are three major adjustments as listed in Engelmann's tweet.
First is the rubber-band adjustment, which compensates for the fact that teams play significantly better when they are losing and worse when they are winning. Here's a cool visual of this effect which compares results to point spreads, while Engelmann's adjustment is made by comparing results to predictions from unadjusted RAPM.
Second, every player's performance is adjusted for age. Engelmann's explanation: "I compute coefficients for age dummy variables, then do weighed polynomial fit on the results".
Third, there is a coach adjustment based on the RAPM values shown earlier in this post.
The goal of these adjustments according to Engelmann:
It tries to answer "who would be the best if everyone was the same age"
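Engelmann's exact age-fit procedure isn't published beyond the quote above, but the idea can be sketched: fit a weighted polynomial (degree 1 here for simplicity) through per-age coefficients, where the weights might be, say, minutes played at each age. All names and numbers below are invented for illustration.

```rust
// Weighted least-squares fit of a line through (x, y) points with weights w:
// a rough sketch of "weighted polynomial fit on age dummy coefficients".
fn weighted_linear_fit(x: &[f64], y: &[f64], w: &[f64]) -> (f64, f64) {
    // weighted means
    let sw: f64 = w.iter().sum();
    let mx: f64 = x.iter().zip(w).map(|(a, b)| a * b).sum::<f64>() / sw;
    let my: f64 = y.iter().zip(w).map(|(a, b)| a * b).sum::<f64>() / sw;
    // weighted slope and intercept
    let num: f64 = x.iter().zip(y.iter()).zip(w).map(|((a, b), c)| c * (a - mx) * (b - my)).sum();
    let den: f64 = x.iter().zip(w).map(|(a, c)| c * (a - mx) * (a - mx)).sum();
    let slope = num / den;
    (slope, my - slope * mx)
}
```

The fitted curve then serves as the expected aging trajectory, and a player's observed impact can be compared against it.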
Here are the top 20 players in this adjusted 1997-2024 RAPM:
# Player Off Def Tot
1 LeBron James 8.4 -2.5 10.8
2 Kevin Garnett 3.8 -5.7 9.5
3 Chris Paul 6.6 -2.7 9.4
4 John Stockton 7.5 -1.7 9.2
5 Stephen Curry 7.5 -1 8.5
6 Manu Ginobili 6.1 -2.5 8.5
7 Nikola Jokic 7.3 -1.2 8.5
8 Dirk Nowitzki 7.1 -1.3 8.4
9 Tim Duncan 3.8 -4.4 8.2
10 Kawhi Leonard 5.8 -1.9 7.7
11 Shaquille O'Neal 6.2 -1.1 7.3
12 Kevin Durant 6.2 -0.9 7.2
13 Michael Jordan 6.8 -0.3 7.1
14 Vince Carter 5.2 -1.9 7.1
15 Jason Kidd 4.5 -2.5 7
16 Jayson Tatum 5 -2 7
17 Draymond Green 2.3 -4.7 7
18 Joel Embiid 3.5 -3.3 6.9
19 Dikembe Mutombo 2 -4.8 6.9
20 Paul George 3.1 -3.6 6.7
Here's a link to the full spreadsheet: https://docs.google.com/spreadsheets/d/1pGTFzq0eE85AP5wW8v8yFzRiJn_lfSCAzh7hd4czQI4/
LeBron easily takes the top spot, with KG in second thanks to by far the best defense. People don't make enough of the fact that CP3 looks way better than consensus in both pure plus-minus and pure box-score stats, which are completely independent of each other.
Stockton rising the most in the top 20 compared to the unadjusted RAPM shows how well he aged, but probably shouldn't be taken at face value since there are no prime years included. Draymond's large drop and Steph's modest rise make the list look more realistic. Jokic drops from 1st to a still respectable 7th because these are mostly prime years and Michael Malone is rated as a top ~5% coach.
submitted by a_bayesian to nbadiscussion


2023.12.09 03:33 No-Dinner-9691 Help w

what test would you use to test whether the relation between a trait (a numeric variable measured via a rating scale) and a numeric outcome variable might be moderated by the experiment condition (a non-numeric variable with four levels)? I was thinking to use ANCOVA (i.e., lm(outcome~trait*condition,data)) but I’m not sure what contrast codes to assign for the experiment condition variable… not sure if it would be polynomial codes as the condition is sort of ordered? Or would it be effect or dummy coding?
submitted by No-Dinner-9691 to RStudio


2023.10.03 21:34 ifailedmathclass69 Help understanding multivariable polynomials?

To make a long story short, my younger brother just started his ninth year of school and has begun learning algebra. My family (including me) has never been very good when it comes to math, yet I’d like to make his school life a little easier than what I had to deal with by helping him understand how “multivariable polynomials” work.
I never went to an english school so I don’t know if that’s what they’re actually called or not, but I’ll give two examples of the equations that he’s dealing with.
  1. 2(6m-3n)(3m-n)-(5m+2n)(6m-n)
  2. (b-2)(b-3)+(b-4)(b-6)-(b-9)(b-8)
I’d be able to help him if it were something like “5(3m+2)+(8-5m)”, but I never got the chance to do what he’s doing so I’m pretty useless when it comes to this.
I even tried watching several videos for “Dummies” where they explain it, yet I couldn’t quite figure out how they got the answer.
To be specific, I’m not asking for the answers to the example equations, I’m trying to figure out what the formula one uses to solve these types of equations is.
(I sincerely hope that whoever looks at this, can understand what I’m trying to ask for help with. I’m not the best with English, but I’ll try to explain even more if this confuses people.)
submitted by ifailedmathclass69 to askmath
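The technique these exercises call for is the distributive law (often taught as "FOIL" for a product of two binomials) followed by collecting like terms. Since the poster asked for the method rather than the answers, here is the general identity and a neutral two-variable example not drawn from the exercises:

```latex
(x + a)(y + b) = xy + xb + ay + ab
% applied to a two-variable product similar in shape to the exercises:
(3m - n)(5m + 2n) = 15m^2 + 6mn - 5mn - 2n^2 = 15m^2 + mn - 2n^2
```

Each product in the exercises expands the same way; the final step is adding or subtracting the expanded results and combining the m^2, mn, and n^2 terms.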


2023.09.10 11:01 djminger007 Cryptape and Zero Knowledge

Privacy is being thought of!!

https://blog.cryptape.com/building-a-zero-knowledge-virtual-machine-on-ckb-vm-for-privacy-preserving-and-secure-computing

https://preview.redd.it/o6fl5jwl7enb1.png?width=1600&format=png&auto=webp&s=4e72c39eead36c27dc881ba77931fcc04809a944

Building a Zero-Knowledge Virtual Machine on CKB-VM for Privacy-Preserving and Secure Computing

Cryptape · Jun 5, 2023

Introduction

Zero-knowledge proofs (ZKPs) are cryptographic protocols that enable a prover to prove the truth of a statement to a verifier without revealing any additional information beyond the statement's validity. This technology has a wide range of applications, including privacy-preserving computations, authentication, and secure data sharing.
A Zero-knowledge Virtual Machine (ZK-VM) is a powerful application of ZKPs that can execute programs written in its instruction set and prove the validity of its execution to the verifier without revealing sensitive data. This makes ZK-VMs a useful tool for privacy-preserving computations and secure decentralized applications. Additionally, with ZKPs, the verifier can verify that the program was executed correctly without needing to perform the computation locally, saving significant computational resources. The ZK-VM's proof can be verified using only a small amount of computation, making it possible to perform secure computations on devices with limited processing power or in resource-constrained environments.
In this article, we will share our experience in implementing a ZK-VM prototype based on the Brainfuck language using the Halo2 library. The project is available in our repository. Our prototype enables us to validate the execution of an arbitrary BF program in the CKB-VM. Rather than focusing on cryptography theories or complex mathematical formulas, this article emphasizes an intuitive understanding of the ZK-VM and its implementation.

A BF-VM

Brainfuck (BF) is a minimalistic programming language that uses only eight commands to manipulate a tape of cells, each initially set to zero. A regular BF-VM uses a memory pointer (mp) and an instruction pointer (ip) to maintain its state. For the purpose of zero-knowledge proofs, we also require a set of extra registers to be maintained. For this article, the following registers are mentioned:
  1. a memory value register (mv) that stores the value currently pointed to by the memory pointer.
  2. a cycle register (cycle) that stores the current cycle of the execution.
The BF language is Turing complete, meaning it can compute any computable function given enough time and memory.
The eight commands in BF are:
  1. > Move the memory pointer to the right
  2. < Move the memory pointer to the left
  3. + Increment the value at the memory pointer
  4. - Decrement the value at the memory pointer
  5. . Output the value at the memory pointer
  6. , Input a value and store it at the memory pointer
  7. [ If the value at the memory pointer is zero, jump forward to the matching ]
  8. ] If the value at the memory pointer is nonzero, jump back to the matching [
For example, a classic Hello, World! program in BF would look like this:
++++++++++[>+++++++>++++++++++>+++>+<<<<-] >++.>+.+++++++..+++.>++.<<+++++++++++++++. >.+++.------.--------.>+.>. 
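To make the eight commands concrete, here is a minimal stand-alone interpreter (our own sketch for this article's readers, not the project's trace-recording BF-VM; all names are ours). Note the wrapping 8-bit arithmetic, which matches the overflow discussion later in the article.

```rust
// A minimal BF interpreter: 30,000 8-bit cells, wrapping +/- semantics,
// and bracket matching by scanning for the matching [ or ].
fn run_bf(code: &str, input: &[u8]) -> (Vec<u8>, Vec<u8>) {
    let prog: Vec<char> = code.chars().filter(|c| "><+-.,[]".contains(*c)).collect();
    let mut tape = vec![0u8; 30_000];
    let (mut mp, mut ip) = (0usize, 0usize); // memory and instruction pointers
    let mut inp = input.iter();
    let mut out = Vec::new();
    while ip < prog.len() {
        match prog[ip] {
            '>' => mp += 1,
            '<' => mp -= 1,
            '+' => tape[mp] = tape[mp].wrapping_add(1), // 255 + 1 wraps to 0
            '-' => tape[mp] = tape[mp].wrapping_sub(1), // 0 - 1 wraps to 255
            '.' => out.push(tape[mp]),
            ',' => tape[mp] = *inp.next().unwrap_or(&0),
            '[' if tape[mp] == 0 => {
                // jump forward to the matching ]
                let mut depth = 1;
                while depth > 0 {
                    ip += 1;
                    depth += match prog[ip] { '[' => 1, ']' => -1, _ => 0 };
                }
            }
            ']' if tape[mp] != 0 => {
                // jump back to the matching [
                let mut depth = 1;
                while depth > 0 {
                    ip -= 1;
                    depth += match prog[ip] { ']' => 1, '[' => -1, _ => 0 };
                }
            }
            _ => {}
        }
        ip += 1;
    }
    (out, tape)
}
```

Feeding it the classic program above prints the greeting; the interpreter ignores whitespace, as any BF implementation must.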

KZG-based Halo2


https://preview.redd.it/kelitmnx7enb1.png?width=2000&format=png&auto=webp&s=8e9c267aff6242df26111ea41cb8f54a25e9137f
Image sourced from Twitter
Halo2 is an implementation of a zero-knowledge proving system written in Rust that utilizes advanced techniques in the field of zero-knowledge proofs. Its high-level interface for building circuits and generating proofs makes it easy to develop custom zero-knowledge proof systems in Rust. By default, Halo2 uses PLONKish arithmetization to build circuits and the Inner Product Argument (IPA) as its commitment scheme, as shown in the figure above. However, for our purposes, we opted to use the KZG variant provided by the Scroll team. The KZG commitment scheme offers superior efficiency in terms of proof size, proving time, and verification time, theoretically achieving constant verification time and proof size regardless of the input size. However, a trusted setup phase is required in order to use KZG-based pairing, where trusted parties perform a setup ceremony to generate the initial parameters for the proof generation process. It is crucial to execute this setup phase accurately and securely to prevent compromising the system's security. Despite the downside of a trusted setup, we believe the performance benefits of the KZG-based pairing outweigh this requirement.

Proving a BF Program

Our BF-VM aims to execute BF programs and generate a trace. This trace serves as input for Halo2 circuits, enabling the generation of a proof. The proof can subsequently be validated by a verifier deployed on the CKB blockchain. Additionally, Halo2 incorporates the Fiat-Shamir transformation to ensure a non-interactive protocol. To illustrate the general workflow of a ZK-VM, please refer to the following flowchart.

https://preview.redd.it/1v6384608enb1.png?width=2000&format=png&auto=webp&s=c6c8fde4a47fe739732ac5a6c02762657f391576

Trace

A trace consists of the history of the execution, including the state of the VM at each cycle. Specifically, our ZK-VM needs to produce the following information:
  1. Processor table A table that contains the register information at each cycle, sorted by cycle.
  2. Memory table A table that contains the same information as the processor table, but is sorted by memory pointer and then cycle.
  3. Instruction table A table which is the union of the program code and processor table.
  4. Input/Output table A table that contains the program's input and output in their natural order.
  5. Program table A table that contains the program instructions.
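As a sketch of how the first two tables relate (the field names below are ours and the rows fabricated), the memory table contains exactly the processor table's rows, re-sorted:

```rust
// One row of register state per cycle; field names are illustrative.
#[derive(Clone, Debug)]
struct Row { cycle: u64, ip: u64, mp: u64, mv: u8 }

// The memory table holds the same rows as the processor table,
// re-sorted by memory pointer first and then by cycle.
fn memory_table(processor: &[Row]) -> Vec<Row> {
    let mut rows = processor.to_vec();
    rows.sort_by_key(|r| (r.mp, r.cycle));
    rows
}
```

Sorting by memory pointer groups all accesses to the same cell together, which is what makes per-cell consistency constraints (like Example 1 below's pointer constraint) expressible row-to-row.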
Our proof system first needs to validate the correctness of each table in the trace. This can be done through the Halo2 circuit system. In the following examples, we will demonstrate a couple of proofs written in Halo2 circuits to provide a better understanding of what we are trying to prove and how we do it.

Example 1: Proving the consistency of the memory pointer

In this example, we want to prove that the states of the memory pointer in our trace are correct. This utilizes our Memory Table mentioned in the previous section. Observe that the Memory Table is sorted by the memory pointer, and the BF's semantics only allow the memory pointer to be increased or decreased by one for a single cycle. Therefore, for a valid Memory Table, we must have the following constraint satisfied:
For each row in the table, the memory pointer increases by one or zero.
We then translate this observation into constraints written in Halo2:
cs.create_gate("M0: memory pointer either increase by one or by zero", |vc| {
    // memory pointer in current cycle
    let cur_mp = vc.query_advice(mp, Rotation::cur());
    // memory pointer in next cycle
    let next_mp = vc.query_advice(mp, Rotation::next());
    // A selector to control whether a constraint should be active or not
    let s_m = vc.query_selector(s_m);
    // The polynomial representation of our constraint
    vec![s_m * (next_mp.clone() - cur_mp.clone() - one.clone()) * (next_mp.clone() - cur_mp.clone())]
});
The s_m selector is Halo2-specific and tells the Halo2 system whether a constraint is active. Ignoring that for a moment, we can see that at the end we formalize the constraint through a polynomial: (next_mp − cur_mp − 1) × (next_mp − cur_mp). The polynomial evaluates to zero only when either the left factor or the right factor evaluates to zero. In particular, the left factor evaluates to zero iff the memory pointer increases by exactly one in the next cycle, and the right factor evaluates to zero iff the memory pointer is the same in the current and next cycle. This polynomial perfectly reflects our observation at the beginning.
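Evaluated over plain integers rather than field elements, the gate's polynomial can be checked directly (a sketch for intuition, not Halo2 code):

```rust
// Zero iff the memory pointer stays put or moves right by exactly one.
fn mp_constraint(cur_mp: i64, next_mp: i64) -> i64 {
    (next_mp - cur_mp - 1) * (next_mp - cur_mp)
}
```

Steps of +1 and 0 each make one factor vanish; any other step leaves the product non-zero, so such a row would fail the constraint.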

Example 2: Proving the execution is overflow-free

In this example, we aim to ensure that our execution is overflow-free. Since our BF-VM operates within an 8-bit domain, valid values range from 0 to 255. When we increase or decrease a value that is already at the limit, we expect it to wrap around, meaning 0−1 becomes 255 and 255+1 becomes 0.
We first validate the domain, which can be accomplished using the lookup table feature built into Halo2. A lookup table allows us to specify a range and validate that a set of values are within that range:
fn load_table(&self, layouter: &mut impl Layouter<Fr>) -> Result<(), Error> {
    layouter.assign_table(
        || "load range-check table",
        |mut table| {
            let mut offset = 0;
            for value in 0..RANGE {
                table.assign_cell(
                    || "value",
                    self.table,
                    offset,
                    || Value::known(Fr::from(value as u64)),
                )?;
                offset += 1;
            }
            Ok(())
        },
    )
}
This Rust code defines a function that loads a lookup table used to validate a range of values. The function takes a mutable reference to a Halo2 Layouter object, which is used to assign values to the lookup table. The assign_table method initializes the lookup table with a set of values within the specified range. For our cases, valid values are 0 to 255. The values are assigned to the lookup table by calling the assign_cell method on the table object, which is passed as an argument to the closure.
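Outside a circuit, the effect of the lookup table amounts to membership in a precomputed set of allowed values (a sketch of the idea, with a range of 256 mirroring the 0-255 domain):

```rust
use std::collections::HashSet;

// Build the table of allowed values, mirroring the circuit's lookup column.
fn build_range_table(range: u64) -> HashSet<u64> {
    (0..range).collect()
}

// The lookup argument's effect: the value must appear in the table.
fn range_check(table: &HashSet<u64>, mv: u64) -> bool {
    table.contains(&mv)
}
```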
Then, we incorporate the lookup table into our circuits using the lookup function:
cs.lookup("Range-Check: mv are within 0-255", |vc| {
    let s_lookup = vc.query_selector(s_lookup);
    let mv = vc.query_advice(mv, Rotation::cur());
    // lookup_table.table is our lookup table
    vec![(s_lookup * mv, lookup_table.table)]
});
Here, s_lookup serves as our selector, and the final polynomial ensures that the memory value at each cycle falls within the range specified in the lookup table.
Next, we validate that the regular operations that modify the memory value are correct. In BF's instruction set, + and - increase or decrease the memory value by one, respectively. Therefore, we create a constraint in the processor table as follows:
cs.create_gate("Add and Sub operate correctly", |vc| {
    let expr_add = deselectors[ADD].clone() * (next_mv.clone() - cur_mv.clone() - one.clone());
    let expr_sub = deselectors[SUB].clone() * (next_mv.clone() - cur_mv.clone() + one.clone());
    vec![expr_add * expr_sub]
});
A deselector for op evaluates to zero iff current_instruction != op. This is particularly useful when validating each of BF's instruction semantics, since it activates only the constraint for the corresponding instruction. In other words, deselectors[ADD] evaluates to a non-zero value only when the current instruction at the instruction pointer is ADD (+). Therefore, to satisfy the constraint, the right-hand side of the polynomial must evaluate to zero. Combining these facts, the constraint we created (expr_add) for ADD operations says that (1) if the current operation is not ADD, the constraint evaluates to zero (through the deselector), and (2) if the current operation is ADD, then the difference between the memory values at the current and next cycle must be one. The constraints on the SUB operation act similarly.
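The deselector itself can be sketched outside Halo2 as a product of differences over all the other opcodes (integer stand-ins for the project's real encoding, which we don't know):

```rust
// Integer stand-ins for the eight opcodes: >, <, +, -, ., ,, [, ]
const OPS: [i64; 8] = [0, 1, 2, 3, 4, 5, 6, 7];

// Product of (instruction - other) over all opcodes other than `op`:
// non-zero only when instruction == op (within the opcode domain).
fn deselector(instruction: i64, op: i64) -> i64 {
    OPS.iter().filter(|&&o| o != op).map(|&o| instruction - o).product()
}
```

Because every opcode other than op contributes a factor, executing any other instruction zeroes the product and deactivates the gate.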
Finally, to validate the wrap-around behavior, we only need to add another scenario for the ADD and SUB operations:
cs.create_gate("Add and Sub operate correctly", |vc| {
    // range_max = 255
    let expr_add = deselectors[ADD].clone()
        * (next_mv.clone() - cur_mv.clone() - one.clone())
        * (next_mv.clone() - cur_mv.clone() + range_max.clone());
    let expr_sub = deselectors[SUB].clone()
        * (next_mv.clone() - cur_mv.clone() + one.clone())
        * (next_mv.clone() - cur_mv.clone() - range_max.clone());
    vec![expr_add * expr_sub]
});
The extra polynomial specifies that for ADD operations, it is also okay for the memory value to be decreased by 255 at the next cycle. This only happens when the current memory value is 255, and the wrap-around causes the result of 255+1 to be 0. A careful reader might ask whether this could also happen for other cases, such as 102+255=357, resulting in a false-positive. However, recall that we have a lookup table to ensure values are always within 0 and 255, hence securing our constraints.
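Putting the two pieces together over plain integers (a sketch, not the circuit): the extended ADD polynomial plus the range check accepts exactly the +1 steps and the 255-to-0 wrap.

```rust
// Extended ADD gate: zero when next = cur + 1 or next = cur - 255 (the wrap).
fn add_gate(cur_mv: i64, next_mv: i64) -> i64 {
    (next_mv - cur_mv - 1) * (next_mv - cur_mv + 255)
}

// The lookup table's role: both values must lie in 0..=255.
fn add_ok(cur_mv: i64, next_mv: i64) -> bool {
    let in_range = |v: i64| (0..=255).contains(&v);
    in_range(cur_mv) && in_range(next_mv) && add_gate(cur_mv, next_mv) == 0
}
```

Without the range check, a pair like cur = 357, next = 102 would satisfy the gate polynomial alone; the lookup table is what rules such pairs out.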

Proving all tables refer to the same program

So far, we have demonstrated through several examples how to prove that each table within the trace is internally consistent. However, this alone does not guarantee a valid trace. For instance, a malicious prover could provide a program table from a "Hello World" program and a memory table from a Fibonacci program, both of which can be proven to be internally correct but together do not constitute a valid trace. Therefore, we must prove that all of the tables point to the same program, i.e. all tables are connected.
This can be achieved through two powerful functions: a permutation running product (PRP), which proves that two tables are permutations of each other, and a running evaluation (RS), which proves that two tables are identical. We will not delve into the details of how to write these constraints in Halo2 here, but the basic idea is as follows:
  1. The process and memory tables contain the same data but are sorted differently, so a PRP check can be used to prove that these two tables are permutations of each other.
  2. The instruction table contains the static program and runtime execution history. Specifically, the part that contains the execution history must be identical to the processor table, which can be proven through an RS check. Similarly, the part that contains the static program must be identical to the program table. We can extract the static program and runtime execution part separately by using polynomial constraints.
  3. The input table contains all the user inputs. The program must then read these inputs, which can be found in the rows of the processor table where the instruction pointer contains GET_CHAR (,). By carefully extracting these parts using the deselector, they must be identical to the input table through an RS check. Similarly, we can use the deselector to extract the part with PUT_CHAR (.), which must be identical to the output table.
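We will not write the Halo2 constraints here either, but the arithmetic heart of the PRP check can be sketched outside a circuit: two columns are permutations of each other iff the running products of (x − value) agree at a random challenge x. The sketch below uses a toy 64-bit prime field; the real system works in the proof system's field.

```rust
// Largest prime below 2^64 (toy modulus for illustration only).
const P: u128 = 18_446_744_073_709_551_557;

// Running product of (x - v) over all values, reduced mod P. Two multisets
// of values yield equal products at a random x iff they are permutations
// of each other (with overwhelming probability over the choice of x).
fn running_product(values: &[u64], x: u64) -> u128 {
    let x = x as u128 % P;
    values.iter().fold(1u128, |acc, &v| {
        let term = (P + x - (v as u128 % P)) % P;
        acc * term % P
    })
}
```

The running evaluation check is built on the same accumulate-row-by-row idea, but it is order-sensitive, which is why it proves the stronger claim that two tables are identical rather than mere permutations.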

Final Step

Before we can claim victory, there is one more step we need to take. While we have already proven, from the prover's perspective, that we hold a valid trace with consistent internal states that all refer to the same program, we still need to ensure that the trace we have proven is the one the user expects. Note that all the proofs so far are zero-knowledge, meaning only the validity of the statement is proven and nothing else about our trace is revealed. Hence we cannot guarantee that a malicious prover won't provide a valid trace of a malicious program while the user expects a trace from a "Hello World" program.
To achieve this, we can utilize the instance column in Halo2. An instance column allows the verifier to provide some public data that can be used for the proof. For our use case, by making the program code and program input into an instance column, and writing the circuits so that the program table and input table are extracted from those public data, we eliminate the possibility for a malicious prover to modify what is being proven. This is because public data must be provided by the verifier, ensuring that the proof is based on the expected program and input.

Verifying a Concrete Program

In this section, we provide a hands-on demonstration of how to use our toolchain to verify a simple BF program.
For our example, we use a basic echo program written in the BF language. This program takes user input that ends with ^] (ASCII code 29) and prints it to standard output. While the program seems simple, the trace it produces is quite long due to the low-level nature of the BF language. For example, when echoing input like "Hello World!", the resulting trace has 217 rows.
To begin, we clone and build the project:
git clone  cd ckb-bf-zkvm && make install && make all 
After that, we proceed with running the prover to execute the echo program. This step is conducted off-chain and involves invoking our BF-VM to execute the program, retrieve the trace, and generate the corresponding proof. This step is also the most expensive of all.
RUST_LOG=info cargo run --release --package ckb_bf_prover -- res/echo.bf 'hello world!^]' 
Behind the scenes, the prover invokes the BF-VM to execute the program and obtain the trace:
let mut f = std::fs::File::open(&args[1])?;
let mut c: Vec<u8> = Vec::new();
f.read_to_end(&mut c)?;
let mut i = Interpreter::new();
let code = code::compile(c.clone());
i.set_input(code::easygen(&args[2]));
i.set_code(code.clone());
i.run();
let trace = i.matrix;
For this prototype, we will skip the trusted setup and use a dummy parameter instead:
// where k is the exponent of the size of the trace (2^k)
let s = Fr::from_u128(GOD_PRIVATE_KEY);
let general_params = ParamsKZG::<Bn256>::unsafe_setup_with_s(k, s);
From the parameter, we can generate the corresponding proving key for the prover:
let pk = keygen_pk(&general_params, vk, &circuit).expect("keygen_pk"); 
Next, we generate the transcript and create the proof. A transcript is an abstract object that holds the proof we will later send to the verifier:
let mut transcript = Blake2bWrite::<_, G1Affine, Challenge255<_>>::init(vec![]);
create_proof::<
    KZGCommitmentScheme<Bn256>,
    ProverSHPLONK<'_, Bn256>,
    Challenge255<G1Affine>,
    XorShiftRng,
    Blake2bWrite<Vec<u8>, G1Affine, Challenge255<G1Affine>>,
    MyCircuit,
>(
    &general_params,
    &pk,
    &[circuit],
    &[&public_inputs],
    rng,
    &mut transcript,
)
.expect("create_proof");
Finally, we serialize the program, input, proof, parameter, and verifying key into a dummy transaction file:
build_ckb_tx(
    &proof[..],
    &verifier_params_buf[..],
    &vk_buf[..],
    &code_u8[..],
    &raw_input[..],
    "target/riscv64imac-unknown-none-elf/release/ckb_bf_verifier",
);
We then verify the transaction file using the verifier:
ckb-debugger --tx-file res/tx.json --cell-index 0 --cell-type input --script-group-type lock --max-cycles 20000000000 
Behind the scenes, the verifier deserializes the proof object and verifies the proof using the Halo2 library:
```rust
let mut verifier_transcript =
    Blake2bRead::<_, G1Affine, Challenge255<_>>::init(&proof_buffer[..proof_len]);
let strategy = SingleStrategy::new(&verifier_params);
// vk is deserialized from the transaction; instances are the public input
let res = verify_proof::<
    KZGCommitmentScheme<Bn256>,
    VerifierSHPLONK<'_, Bn256>,
    Challenge255<G1Affine>,
    Blake2bRead<&[u8], G1Affine, Challenge255<G1Affine>>,
    SingleStrategy<'_, Bn256>,
>(&verifier_params, &vk, strategy, &[&instances], &mut verifier_transcript);
```

Tuning Halo2 for CKB-VM

While our prover and verifier can handle any BF program and produce a proof with Halo2, the KZG-based commitment scheme's nominally constant verification time and proof size run into practical issues. The next few paragraphs explain how we tuned the Halo2 library for optimal performance on the CKB-VM.

Reducing VK Size and Memory Consumption

The size of the verification key (VK), which ships as part of the proof, is largely determined by the number of selectors used in the circuits. During serialization these selectors are written into the VK, which for longer traces amounts to a large volume of data. Although selectors are binary values and can be compressed into a bit format, the volume is still substantial. Halo2 does implement a compression algorithm that folds selectors into SelectorAssignment objects, but the Halo2 team provided no serialization method for it, so the compression must be re-computed on the verifier's side. That inflates both the VK size and the verifier's memory consumption, since the compression algorithm is quite memory-intensive. In our fork of Halo2, we provide a complete serialization of SelectorAssignment: the compression algorithm runs only once, on the prover's side, and the verifier no longer re-computes it. As a result, the VK size is a constant 2 KB, and we avoid potential out-of-memory issues on the CKB-VM.
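To see why the selector data compresses so well, note that selectors are 0/1 per row: stored naively as field elements they cost a full field word per row, while simple bit-packing already brings a column down to one byte per eight rows (Halo2's SelectorAssignment compression goes further by combining selector columns). A toy sketch of the bit-packing intuition, not Halo2's actual encoding:

```rust
// Toy illustration of selector size intuition: a boolean column packs 8 rows
// into one byte. This is NOT Halo2's actual SelectorAssignment format.
fn pack_selector(column: &[bool]) -> Vec<u8> {
    let mut bytes = vec![0u8; (column.len() + 7) / 8];
    for (i, &bit) in column.iter().enumerate() {
        if bit {
            bytes[i / 8] |= 1u8 << (i % 8); // set bit i%8 of byte i/8
        }
    }
    bytes
}

fn main() {
    let column = vec![true, false, true, true, false, false, false, false, true];
    let packed = pack_selector(&column);
    println!("{} rows -> {} bytes", column.len(), packed.len()); // 9 rows -> 2 bytes
}
```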

Improving Verification Time

In previous sections, we mentioned that the verifier must supply the program code and input as public data, to validate that the prover proved the correct program. But public data comes at a cost: since the verifier provides it, the verifier must also perform the necessary commitment computations on it, instead of leaving that work to the prover. The larger the public data, the more the verifier must compute.

There is a trick to minimize this extra cost: instead of providing the entire program and input as public data, we provide only a hash of them. For our implementation, we chose the Poseidon hash, an efficient, curve-friendly hash function. A user who wants to validate an execution trace on the CKB-VM supplies the Poseidon hash of the public input, so the size of the public input is always constant. On the prover side, we use the hash gadget built into the Halo2 library, which computes the Poseidon hash of a set of data and proves the hash computation's correctness; we apply this gadget to the program table and the input table. Finally, we add an extra constraint to the circuit: the proof validates only when the hash value provided in the public data matches the value computed from the program and input tables, i.e. when the correct program has been proven.
This approach adds computation on the prover's side, which is acceptable since the prover is assumed to run on a more computationally powerful machine. What matters is that the verifier can now compute its public data cheaply and complete the verification. In our experiments, the cycle cost on the CKB-VM drops from 170M to 130M when the hash value is used as public data.
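The pattern is generic: replace variable-length public data with a fixed-size digest, and constrain inside the circuit that the digest matches. The toy sketch below uses FNV-1a purely as a dependency-free stand-in for Poseidon (`public_input` and its 0xFF separator are invented for this example); the point is only that the verifier's public input stays a fixed 8 bytes no matter how large the program and input grow:

```rust
// FNV-1a stands in for Poseidon here purely to keep the sketch
// dependency-free; a real circuit would use a curve-friendly hash.
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x0000_0100_0000_01b3);
    }
    h
}

// Hypothetical helper: the verifier commits to a fixed-size digest instead of
// the whole program + input.
fn public_input(program: &[u8], input: &[u8]) -> [u8; 8] {
    let mut preimage = program.to_vec();
    preimage.push(0xFF); // arbitrary separator between program and input
    preimage.extend_from_slice(input);
    fnv1a(&preimage).to_le_bytes()
}

fn main() {
    let small = public_input(b",.", b"a");
    let large = public_input(&vec![b'+'; 100_000], &vec![0u8; 100_000]);
    // both digests are 8 bytes, regardless of program/input size
    println!("{} bytes vs {} bytes", small.len(), large.len());
}
```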

Conclusion

Halo2 is a versatile framework that provides a robust set of tools for developing zero-knowledge proof systems for various use cases. By incorporating a simple ZK-VM with CKB-VM using Halo2, we have shown how this framework can be leveraged to extend CKB-VM's applications. As the field of zero-knowledge proofs continues to evolve, we anticipate more advanced techniques and frameworks emerging to support more complex proof systems, opening up new opportunities for secure and private computation.
✍🏻 by Xiaowen Hu, Jiandong Xu, Biao Yi, Wanbiao Ye
submitted by djminger007 to NervosNetwork [link] [comments]


2023.03.17 00:08 ei9880 Multiple Regression Model not coming out linear

I am doing a multiple regression in Excel using the Data Analysis tool. I have past months of data where my output is based on two variables. As these variables increase, so should my output; in other words, I want this to be a linear model.
As I test my function with dummy data, I notice that increasing one of my variables causes my output to decrease. I have no clue why, as both charts (one for each variable) show a linear trend. Even if it turns out to be a polynomial, I want it to stay linear. Why on Earth is my output decreasing when one of my variables increases???
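What the post describes is the classic multicollinearity sign flip: when two predictors are strongly correlated, the joint fit can assign one of them a negative coefficient even though the output rises with it marginally. A self-contained sketch with made-up numbers (y is constructed as 2*x1 - x2, with x2 closely tracking x1):

```rust
fn mean(v: &[f64]) -> f64 { v.iter().sum::<f64>() / v.len() as f64 }

// Centered cross-product sum: sum over i of (a_i - mean(a)) * (b_i - mean(b))
fn s(a: &[f64], b: &[f64]) -> f64 {
    let (ma, mb) = (mean(a), mean(b));
    a.iter().zip(b).map(|(x, y)| (x - ma) * (y - mb)).sum()
}

// Slopes of y ~ x1 + x2 (with intercept), via normal equations on centered data.
fn ols2(x1: &[f64], x2: &[f64], y: &[f64]) -> (f64, f64) {
    let (s11, s22, s12) = (s(x1, x1), s(x2, x2), s(x1, x2));
    let (s1y, s2y) = (s(x1, y), s(x2, y));
    let det = s11 * s22 - s12 * s12;
    ((s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det)
}

fn main() {
    // made-up data: x2 tracks x1, and y = 2*x1 - x2 exactly
    let x1 = [0.0, 1.0, 2.0, 3.0, 4.0];
    let x2 = [0.1, 0.9, 2.2, 2.9, 4.1];
    let y: Vec<f64> = x1.iter().zip(&x2).map(|(a, b)| 2.0 * a - b).collect();
    let simple_slope = s(&x2, &y) / s(&x2, &x2); // y on x2 alone: positive
    let (b1, b2) = ols2(&x1, &x2, &y);           // joint fit recovers b1 = 2, b2 = -1
    println!("simple slope of x2 = {:.3}, joint b1 = {:.3}, b2 = {:.3}",
             simple_slope, b1, b2);
}
```

Charting y against each variable alone shows an upward trend, yet the fitted coefficient on x2 is negative: the joint model credits x1 with the shared upward movement.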
submitted by ei9880 to excel [link] [comments]


2022.07.08 16:50 OneMeterWonder Ideas to help struggling students

Hi there!
First off, let me say that I'm looking for specific actions I can implement in my courses to help my students get through conceptual blocks.
Now, I am a college instructor teaching courses around the calculus level and above. The last several times I've taught these courses, I've noticed that a disturbing proportion of my students simply don't seem to understand quite basic concepts. This is not to disparage them, but it is a real problem that
  1. makes it incredibly difficult for them to comprehend the current material, and
  2. makes it even more difficult for me to teach them as well as the handful of students that actually do seem to be caught up.
I've been teaching for a short while now and have developed what appear to be fairly effective methods of building confidence and understanding. But what I'm seeing more and more frequently is that students are increasingly unprepared for the difficulty of the material in my courses.
Some examples of issues I've seen are
and
I truly and genuinely care about my students' well-being and education, but this has never been anything less than astounding to me. I just have no real clue how I can actually do anything serious to help these students short of outright telling them they have serious deficiencies in their understanding. How can students this far behind be passed through prerequisite classes to my course that also required these concepts as prerequisites? From my perspective they are at least two to three courses beyond where their level of comprehension lies, and I cannot ethically do anything less than fail them.
So what sorts of things have you all tried that hopefully relieve some of the problems on the above list? There seems to be only so much I can do to teach the actual material while reteaching them material that should have been covered in primary and secondary school.
Sincerely,
- A really frustrated instructor who is tired of just venting
submitted by OneMeterWonder to matheducation [link] [comments]


2022.06.27 10:26 Damsauro How to interpret multivariate polynomial regression coefficients?

Hello! I tried getting a better fit than my multiple linear regression model's, so I applied (hopefully correctly) a polynomial regression with multiple (4) predictor variables and 1 dependent variable. I got the following results:
    best = lm(acti ~ poly(conec,6) + poly(depen,6) + poly(educa,6) + poly(masc, 1), data=df)
    summary(best)

    Call:
    lm(formula = acti ~ poly(conec, 6) + poly(depen, 6) + poly(educa, 6) + poly(masc, 1), data = df)

    Residuals:
         Min       1Q   Median       3Q      Max
    -1.27984 -0.22568 -0.01831  0.17042  0.78802

    Coefficients:
                    Estimate Std. Error t value Pr(>|t|)
    (Intercept)      2.32877    0.05024  46.350   <2e-16 ***
    poly(conec, 6)1 -1.13185    0.52546  -2.154   0.0358 *
    poly(conec, 6)2  0.01121    0.46742   0.024   0.9810
    poly(conec, 6)3  1.06116    0.49711   2.135   0.0374 *
    poly(conec, 6)4 -0.47347    0.48510  -0.976   0.3335
    poly(conec, 6)5 -0.18415    0.51739  -0.356   0.7233
    poly(conec, 6)6  0.16680    0.47720   0.350   0.7281
    poly(depen, 6)1 -1.13595    0.54095  -2.100   0.0405 *
    poly(depen, 6)2 -0.99386    0.54163  -1.835   0.0721 .
    poly(depen, 6)3 -0.24898    0.53813  -0.463   0.6455
    poly(depen, 6)4 -0.48235    0.59941  -0.805   0.4246
    poly(depen, 6)5  0.45644    0.65555   0.696   0.4893
    poly(depen, 6)6 -0.69786    0.55371  -1.260   0.2131
    poly(educa, 6)1 -0.01625    0.56867  -0.029   0.9773
    poly(educa, 6)2  0.08472    0.50804   0.167   0.8682
    poly(educa, 6)3  0.56574    0.53240   1.063   0.2928
    poly(educa, 6)4  0.96280    0.54844   1.756   0.0849 .
    poly(educa, 6)5 -0.13807    0.65780  -0.210   0.8346
    poly(educa, 6)6 -0.56441    0.63831  -0.884   0.3806
    poly(masc, 1)    0.50123    0.48196   1.040   0.3031
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
So, as opposed to a multiple linear regression model, the coefficients seem to contradict each other. Am I supposed to average or add up the coefficients to see whether a predictor affects the model positively or negatively? And how am I supposed to deal with the different p-values, by averaging or adding them up as well?
If this is not too feasible I'll just stick with MLR but it's worth a shot.
By the way, the "masc" variable stands for "masculine", and it's a dummy variable, therefore I don't think it can support more than degree 1.
Thanks!!
submitted by Damsauro to AskStatistics [link] [comments]


2022.04.20 04:54 Relevant-Ad2254 [Question]how do i find out if Michigan tech Masters in Applied Statistics legit?

First, I want to mention that I DO NOT CARE ABOUT PRESTIGE. I have no issues getting interviews, I work in a data analyst role at my company, and I could easily transition into a data scientist role if I learned the required statistics.

Bottom line though: all I care about is learning the statistics needed to do well in data scientist roles (specifically predictive modeling, regression modeling, time series forecasting, etc.).

Anyway, I was accepted to an online master's in applied statistics program at Michigan Tech and I'm nervous about starting. It will require a lot of time and money, so I want to make sure it will teach me well.

It's a relatively new program; I saw a few graduates on LinkedIn in data analyst and data scientist roles.

Below is a brief description of its core courses.

Please let me know if there's anything else you'd suggest I do to figure out whether the program is worth the money.

MA 4700 PROBABILITY AND STATISTICAL INFERENCE I — 3 Credits Introduction to probabilistic methods. Topics include probability laws, counting rules, discrete and continuous random variables, expectation, joint distributions, central limit theorem, and functions of random variables.

MA 5701 STATISTICAL METHODS — 3 Credits Introduction to design, conducting and analysis of statistical studies, with an introduction to statistical computing and preparation of statistical reports. Topics include design, descriptive, and graphical methods, probability models, parameter estimation and hypothesis testing.

MA 4705 PROBABILITY AND STATISTICAL INFERENCE II — 3 Credits Continuation of MA4700. Topics include sampling distributions, theory of point and interval estimation, properties of estimators, and theory of hypothesis testing.

MA 4720 DESIGN AND ANALYSIS OF EXPERIMENTS — 3 Credits Covers construction and analysis of completely randomized, randomized block, incomplete block, Latin squares, factorial, fractional factorial, nested, and split-plot designs. Also examines fixed, random and mixed effects models, and multiple comparisons and contrasts. The SAS statistical package is an integral part of the course.
MA 4710 REGRESSION ANALYSIS — 3 Credits Covers simple, multiple, and polynomial regression; estimation, testing, and prediction; weighted least squares, matrix approach, dummy variables, multicollinearity, model diagnostics and variable selection. A statistical computing package is an integral part of the course.
MA 5771 GENERALIZED LINEAR REGRESSION — 3 Credits Extension of linear regression to binary data, count data, multinomial data, ordinal data, and contingency tables. Generalized linear mixed models for dependent data. Application to real world problems.

MA 5761 COMPUTATIONAL STATISTICS — 3 Credits Introduction to computationally intensive statistical methods. Topics include resampling methods, Monte Carlo simulation methods, smoothing technique to estimate functions, and methods to explore data structure. This course will use the S-plus statistical software package.

MA 5781 TIME SERIES ANALYSIS AND FORECASTING — 3 Credits Statistical modeling and inference for analyzing experimental data that have been observed at different points in time. Topics include models for stationary and non-stationary time series, model specification, parametric estimation, model diagnostics and forecasting, seasonal models, and time series regression models.

MA 5751 STATISTICAL DATA MINING — 3 Credits Modern statistical data mining techniques and their applications. Topics include, but are not limited to, linear model selection and regularization, regression and smoothing splines, unsupervised learning, resampling methods, tree-based methods, and deep learning.

MA 5790 PREDICTIVE MODELING — 3 Credits Application, construction, and evaluation of statistical models used for prediction and classification. Topics include data preprocessing, over-fitting and model tuning, linear and nonlinear regression models, and linear and nonlinear classification models.
submitted by Relevant-Ad2254 to statistics [link] [comments]


2022.01.31 16:33 Kidwa96 [Q] How should I prepare myself for Masters degree in Applied Stat as someone who's bachelor background is not Statistics?

Hey guys, I did my bachelor's in business (majoring in Finance) and have always been interested in data analysis. I work in Product Management now and have been learning to code in Python for some time. But eventually I realized that, regardless of how much coding I learn, I need theoretical knowledge as well.
So I decided to do a master's in Applied Stats.
My stat basics is not very strong and the courses I'll be taking in my first semester are: Applied Regressions and Applied Time Series Analysis.
The question(s): what should I learn before my classes start? What are some good resources (YouTube videos/websites) which can help me learn these?
Thank you very much for your time.
Detailed Course Content if anyone's interested:
AST 502: Applied Regressions (Credit: 4)
  1. Simple Regression Models: Review
  2. Multiple Regression Models and Estimation: matrix notation; hyperplane extension of the simple linear model; interaction models; basic estimation and inference for multiple regression; related application
  3. General Linear F Test and Sequential SS: reduced and full models; F test for general linear hypotheses; effect of a variable controlled for other predictors; sequential SS; partial correlation; related application
  4. Multicollinearity Between X Variables: effect on standard deviations of coefficients; problems interpreting effects of individual variables; apparent conflicts between the overall F test and individual-variable t tests; benefits of designed experiments; related application
  5. Polynomial Regression Models
  6. Categorical Predictor Variables: dummy-variable regression; interpretation of models containing indicator/dummy variables; piecewise regression; related application
  7. More Diagnostic Measures and Remedial Measures for Lack of Fit: variance inflation factors; ridge regression; deleted residuals; influence statistics (hat matrix, Cook's D and related measures); related application
  8. Examining All Possible Regressions: R2, MSE, Cp; stepwise algorithms; related application
  9. Nonlinear Regression: logistic and Poisson regression models; probit model, tobit model; related application
AST 508: Applied Time Series Analysis
  1. Introduction: Examples, simple descriptive techniques, trend, seasonality, the correlogram. White noise (WN), transformation to stationarity, stationary time series with practical examples
  2. Probability models for time series: stationarity. Moving average (MA), Autoregressive (AR), ARMA, ARIMA, SARIMA models with applications to economics, engineering and biomedical sciences
  3. Estimating the autocorrelation function and fitting ARIMA models.
  4. Forecasting: Exponential smoothing, Forecasting from ARIMA models.
  5. Stationary multivariate models: Stationary multivariate models with application to real life data. Dynamic simultaneous equations models, Vector autoregression (VAR) models, Granger causality, Impulse response functions, Variance decompositions, Structural VAR models.
  6. Nonstationary Multivariate models: Nonstationary Multivariate models with examples. Spurious regression, Cointegration, Granger representation theorem, Vector error correction models (VECMs), Structural VAR models with cointegration, testing for cointegration, estimating the cointegrating rank, estimating cointegrating vectors.
  7. Stationary processes in the frequency domain: The spectral density function, the periodogram, spectral analysis with Empirical aspects of spectral analysis
  8. State-space models: Dynamic linear models and the Kalman filter with applications of filter
submitted by Kidwa96 to statistics [link] [comments]


2022.01.31 16:28 Kidwa96 Help out a beginner

Hey guys, I did my bachelor's in business (majoring in Finance) and have always been interested in data analysis. I work in Product Management now and have been learning to code in Python for some time now. But eventually, I realized, regardless of how much coding I learn, I need to have theoretical knowl as well.
So I decided to do a master's in Applied Stats.
My stat basics is not very strong and the courses I'll be taking in my first semester are: Applied Regressions and Applied Time Series Analysis.
The question(s): what should I learn before my classes start? What are some good resources (YouTube videos/websites) which can help me learn these?
Thank you very much for your time.
submitted by Kidwa96 to AskStatistics [link] [comments]


2021.09.09 18:53 Desire_To_Achieve Daily Analysis #82 (AMP Technicals)

I did quite a bit of research today before making today's post. So this should be another good one. So let's hop right into it.
Daily Chart: Zooming out and applying mathematics (polynomial regression) to the daily chart, we can see that we are at the bottom of the polynomial regression function. If you're interested in learning more about polynomial regression models in a simple way, you can absorb that information here. We know that AMP doesn't move in a linear fashion, which is why polynomial functions best fit AMP's charts. This mathematical function has been spot on at predicting AMP's highs and lows. Even if we apply the poly regression function on top of my TAs, we can see that the alignment is spot on. Because we have been in oversold territory for quite some time, it's only a matter of time before we see upward momentum. What goes up must come down, and what goes down must go up; that's literally how markets work.
From today's candle, we can see that there is some bull presence (long wicks at the bottom of the candle) and selling volume is decreasing yet again. We can also see that the market is respecting the support line we spotted yesterday that looks like an ascending triangle. We need a few more days to confirm this market structure.
4H Chart: On the 4H chart, we can see that we have a lot of recent 4H candles that tested the support line that's trending upwards. The support line didn't break, which is healthy and exactly what we want to see. We don't want to go up too fast nor do we want to remain stagnant for too long. If this market structure holds, we should see some good price action from AMP by Sept 22, placing us above the $0.062 resistance level.
Conclusion: Flexa does it again. The largest financial institution in El Salvador selects Flexa to power bitcoin payments for the bank's clients and services, including Wompi and other payment methods. "Bancoagrícola is now accepting bitcoin (BTC) across its network for payments toward loans, credit cards, and merchant goods and services in compliance with the new 'Bitcoin Law' effective September 7." This partnership is active and implemented today. "Bancoagrícola customers can now use any Flexa- or Lightning-enabled wallet app to pay bitcoin for US dollar–based loans and credit card payments at the exact fair market rate, without any additional fee or spread." We should see network fees start to pick up real soon! If you saw what happened with ALGO, it's likely that AMP could go parabolic too, but that's not ideal for the network.
" Flexa has been instrumental in helping us prioritize our customers’ experiences using bitcoin as legal tender in El Salvador; we look forward to continuing to innovate alongside them for a long time to come.”
As a UX Researcher myself, I know firsthand that technology doesn't bring mass adoption; an easy user experience does. And that's exactly what Flexa has accomplished. Now, on the latter side of things, AMP needs to make the staking experience easier for the average holder, or implement the new technology they've claimed to have that allows anyone to stake on the Flexa network simply by holding AMP in any digital wallet. This will bring mass adoption. And if they can license that technology to other tokens (staking by holding in any wallet), the flood gates will open massively!
AMP, Flexa, don't keep us waiting too long.
Until next time, DTA is out!
[Edit: A Look At Local Bitcoin Adoption In El Salvador, Don't let the FUD fool you folks]
https://preview.redd.it/a1cjdvnhcim71.png?width=2560&format=png&auto=webp&s=67d490bf6d46674b49328fdbeec43d6fdf8c6881
submitted by Desire_To_Achieve to AMPToken [link] [comments]


2021.08.09 18:23 CptBishop Struggling to create proper .exe with outdated (?) pip module using auto py to exe

Ok, it has been like 4 weeks and I'm struggling greatly with creating an .exe file of a program I wrote. I have a simple folder structure like this https://imgur.com/a/N5kEde8 . It uses 3 pip modules: numpy, tkinter and ezdxf. It's the last one (I believe) that gives me issues. What I'm doing is using Auto py to exe to create a one-directory build with this setup https://imgur.com/a/UOqwO2s (nothing changed later on). Well, it does create a folder with an .exe looking like this https://imgur.com/a/FmGH0Z8. On startup, the exe drops a module-related error https://imgur.com/a/WamnZ2a. I did try to mess with the imported pip module by deleting uuid-related lines, but it just gave me another error and I'm not sure how far the rabbit hole goes. Any way to fix this issue? The script launched via VisualStudio works flawlessly.

Here is the output of Auto py to exe:
Running auto-py-to-exe v2.9.0 Building directory: C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91 Provided command: pyinstaller --noconfirm --onedir --windowed --name "MyApp" --add-data "C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/ezdxf;ezdxf/" --add-data "C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/ezdxf-0.15.2.dist-info;ezdxf-0.15.2.dist-info/" --add-data "C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/numpy;numpy/" --add-data "C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/numpy-1.20.2.dist-info;numpy-1.20.2.dist-info/" --paths "C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages" --paths "C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib" --paths "C:/Users/Bishop/AppData/Local/Programs/Python/Python39" --paths "C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Scripts" --add-data "C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/tk;tk/" --add-data "C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/tk-0.1.0.dist-info;tk-0.1.0.dist-info/" --add-data "E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/EXP;EXP/" --add-data "E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/GUI;GUI/" --add-data "E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/IMP;IMP/" --add-data "E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/OBL;OBL/" "E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/main.py" Recursion Limit is set to 5000 Executing: pyinstaller --noconfirm --onedir --windowed --name MyApp --add-data C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/ezdxf;ezdxf/ --add-data C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/ezdxf-0.15.2.dist-info;ezdxf-0.15.2.dist-info/ --add-data C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/numpy;numpy/ --add-data 
C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/numpy-1.20.2.dist-info;numpy-1.20.2.dist-info/ --paths C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages --paths C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib --paths C:/Users/Bishop/AppData/Local/Programs/Python/Python39 --paths C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Scripts --add-data C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/tk;tk/ --add-data C:/Users/Bishop/AppData/Local/Programs/Python/Python39/Lib/site-packages/tk-0.1.0.dist-info;tk-0.1.0.dist-info/ --add-data E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/EXP;EXP/ --add-data E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/GUI;GUI/ --add-data E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/IMP;IMP/ --add-data E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/OBL;OBL/ E:/Programowanie/PyCharmTesty/Python/KakolGaz_mini/main.py --distpath C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91\application --workpath C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91\build --specpath C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91

67166843 INFO: PyInstaller: 4.4
67166848 INFO: Python: 3.9.2
67166853 INFO: Platform: Windows-8.1-6.3.9600-SP0
67166859 INFO: wrote C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91\MyApp.spec
67166870 INFO: UPX is not available.
67166875 INFO: Extending PYTHONPATH with paths ['E:\\Programowanie\\PyCharmTesty\\Python', 'C:\\Users\\Bishop\\AppData\\Local\\Programs\\Python\\Python39\\Lib\\site-packages', 'C:\\Users\\Bishop\\AppData\\Local\\Programs\\Python\\Python39\\Lib', 'C:\\Users\\Bishop\\AppData\\Local\\Programs\\Python\\Python39', 'C:\\Users\\Bishop\\AppData\\Local\\Programs\\Python\\Python39\\Scripts', 'C:\\Users\\Bishop\\AppData\\Local\\Temp\\tmprw1h9_91']
67166881 INFO: checking Analysis
67166889 INFO: Building Analysis because Analysis-13.toc is non existent
67166893 INFO: Reusing cached module dependency graph...
67167090 INFO: Caching module graph hooks...
67167493 INFO: running Analysis Analysis-13.toc
67167499 INFO: Adding Microsoft.Windows.Common-Controls to dependent assemblies of final executable required by c:\users\bishop\appdata\local\programs\python\python39\python.exe
67167692 INFO: Analyzing E:\Programowanie\PyCharmTesty\Python\KakolGaz_mini\main.py
67169864 INFO: Processing pre-find module path hook site from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-site.py'.
67169867 INFO: site: retargeting to fake-dir 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\fake-modules'
67171936 INFO: Processing pre-safe import module hook setuptools.extern.six.moves from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-setuptools.extern.six.moves.py'.
67174262 INFO: Processing module hooks...
67174267 INFO: Loading module hook 'hook-eel.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
67174365 INFO: Loading module hook 'hook-pycparser.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
67174371 INFO: Loading module hook 'hook-difflib.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67174379 INFO: Loading module hook 'hook-distutils.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67174384 INFO: Loading module hook 'hook-distutils.util.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67174389 INFO: Loading module hook 'hook-encodings.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67174448 INFO: Loading module hook 'hook-gevent.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67174780 WARNING: Unable to find package for requirement zope.event from package gevent.
67174784 WARNING: Unable to find package for requirement zope.interface from package gevent.
67174786 INFO: Packages required by gevent: ['setuptools', 'cffi', 'greenlet']
67175592 INFO: Loading module hook 'hook-heapq.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67175599 INFO: Loading module hook 'hook-lib2to3.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67175630 INFO: Loading module hook 'hook-multiprocessing.util.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67175636 INFO: Loading module hook 'hook-numpy.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67175675 INFO: Import to be excluded not found: 'f2py'
67175706 INFO: Loading module hook 'hook-numpy._pytesttester.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67175715 INFO: Loading module hook 'hook-pickle.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67175721 INFO: Loading module hook 'hook-PIL.Image.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67175988 INFO: Loading module hook 'hook-PIL.ImageFilter.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67175995 INFO: Loading module hook 'hook-PIL.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67176009 INFO: Loading module hook 'hook-PIL.SpiderImagePlugin.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67176017 INFO: Loading module hook 'hook-pkg_resources.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67176237 INFO: Processing pre-safe import module hook win32com from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\pre_safe_import_module\\hook-win32com.py'.
67176593 WARNING: Hidden import "pkg_resources.py2_warn" not found!
67176599 WARNING: Hidden import "pkg_resources.markers" not found!
67176606 INFO: Loading module hook 'hook-setuptools.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67177024 INFO: Loading module hook 'hook-sysconfig.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67177030 INFO: Loading module hook 'hook-win32ctypes.core.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67177157 INFO: Loading module hook 'hook-xml.dom.domreg.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67177162 INFO: Loading module hook 'hook-xml.etree.cElementTree.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67177167 INFO: Loading module hook 'hook-xml.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67177171 INFO: Loading module hook 'hook-zope.interface.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67177178 INFO: Loading module hook 'hook-_tkinter.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67177279 INFO: checking Tree
67177284 INFO: Building Tree because Tree-39.toc is non existent
67177288 INFO: Building Tree Tree-39.toc
67177337 INFO: checking Tree
67177342 INFO: Building Tree because Tree-40.toc is non existent
67177347 INFO: Building Tree Tree-40.toc
67177410 INFO: checking Tree
67177415 INFO: Building Tree because Tree-41.toc is non existent
67177419 INFO: Building Tree Tree-41.toc
67177426 INFO: Loading module hook 'hook-pythoncom.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
67177677 INFO: Loading module hook 'hook-pywintypes.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
67177919 INFO: Loading module hook 'hook-win32com.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
67178305 INFO: Loading module hook 'hook-setuptools.msvc.py' from 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks'...
67178333 INFO: Looking for ctypes DLLs
67178390 INFO: Analyzing run-time hooks ...
67178398 INFO: Including run-time hook 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_pkgutil.py'
67178405 INFO: Including run-time hook 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_multiprocessing.py'
67178412 INFO: Including run-time hook 'c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_inspect.py'
67178423 INFO: Looking for dynamic libraries
67178690 INFO: Looking for eggs
67178696 INFO: Using Python library c:\users\bishop\appdata\local\programs\python\python39\python39.dll
67178701 INFO: Found binding redirects: []
67178711 INFO: Warnings written to C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91\build\MyApp\warn-MyApp.txt
67178778 INFO: Graph cross-reference written to C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91\build\MyApp\xref-MyApp.html
67178793 INFO: Appending 'datas' from .spec
67178872 INFO: checking PYZ
67178878 INFO: Building PYZ because PYZ-13.toc is non existent
67178883 INFO: Building PYZ (ZlibArchive) C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91\build\MyApp\PYZ-13.pyz
67181200 INFO: Building PYZ (ZlibArchive) C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91\build\MyApp\PYZ-13.pyz completed successfully.
67181218 INFO: checking PKG
67181225 INFO: Building PKG because PKG-13.toc is non existent
67181230 INFO: Building PKG (CArchive) PKG-13.pkg
67181256 INFO: Building PKG (CArchive) PKG-13.pkg completed successfully.
67181263 INFO: Bootloader c:\users\bishop\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\bootloader\Windows-64bit\runw.exe
67181268 INFO: checking EXE
67181273 INFO: Building EXE because EXE-13.toc is non existent
67181278 INFO: Building EXE from EXE-13.toc
67181284 INFO: Copying icons from ['c:\\users\\bishop\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\PyInstaller\\bootloader\\images\\icon-windowed.ico']
67181290 INFO: Writing RT_GROUP_ICON 0 resource with 104 bytes
67181296 INFO: Writing RT_ICON 1 resource with 3752 bytes
67181301 INFO: Writing RT_ICON 2 resource with 2216 bytes
67181307 INFO: Writing RT_ICON 3 resource with 1384 bytes
67181312 INFO: Writing RT_ICON 4 resource with 38188 bytes
67181316 INFO: Writing RT_ICON 5 resource with 9640 bytes
67181322 INFO: Writing RT_ICON 6 resource with 4264 bytes
67181326 INFO: Writing RT_ICON 7 resource with 1128 bytes
67181333 INFO: Appending archive to EXE C:\Users\Bishop\AppData\Local\Temp\tmprw1h9_91\build\MyApp\MyApp.exe
67181828 INFO: Building EXE from EXE-13.toc completed successfully.
67181839 INFO: checking COLLECT
67181844 INFO: Building COLLECT because COLLECT-13.toc is non existent
67181849 INFO: Building COLLECT COLLECT-13.toc
67183589 INFO: Building COLLECT COLLECT-13.toc completed successfully.
Moving project to: C:\Users\Bishop\AppData\Local\Programs\Python\Python39\Scripts\output
Complete.
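(For readers untangling the long option list at the top of that log: the repeated --paths / --add-data pairs are just a flat argument list, and it can help to build it programmatically. A minimal sketch with a hypothetical helper — the paths here are placeholders, not the poster's real layout; on Windows the --add-data source/destination separator is ";", on POSIX it is ":".)

```python
# Hypothetical helper that assembles a PyInstaller-style argument list.
# Paths are illustrative placeholders, not the project from the log above.
def pyinstaller_args(script, add_data, extra_paths, sep=";"):
    """Build a flat CLI argument list: --paths entries, then --add-data pairs, then the script."""
    args = []
    for p in extra_paths:
        args += ["--paths", p]          # extra module search locations
    for src, dest in add_data:
        args += ["--add-data", f"{src}{sep}{dest}"]  # data files, not importable modules
    args.append(script)
    return args

print(pyinstaller_args("main.py", [("GUI", "GUI/")], ["E:/proj"]))
# ['--paths', 'E:/proj', '--add-data', 'GUI;GUI/', 'main.py']
```

Note the distinction the helper's comments hint at: --add-data bundles files as data, while only directories on the search path (--paths / pathex) are considered when resolving imports.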
And here is the "warnings" txt that was created:
This file lists modules PyInstaller was not able to find. This does not necessarily mean this module is required for running your program. Python and Python 3rd-party packages include a lot of conditional or optional modules. For example the module 'ntpath' only exists on Windows, whereas the module 'posixpath' only exists on Posix systems.

Types of import:
* top-level: imported at the top-level - look at these first
* conditional: imported within an if-statement
* delayed: imported from within a function
* optional: imported within a try-except-statement

IMPORTANT: Do NOT post this list to the issue-tracker. Use it as a basis for yourself tracking down the missing module. Thanks!

missing module named pep517 - imported by importlib.metadata (delayed)
missing module named pwd - imported by posixpath (delayed, conditional), shutil (optional), tarfile (optional), pathlib (delayed, conditional, optional), subprocess (optional), http.server (delayed, optional), webbrowser (delayed), netrc (delayed, conditional), getpass (delayed), distutils.util (delayed, conditional, optional), distutils.archive_util (optional), gevent.subprocess (optional)
missing module named org - imported by copy (optional)
missing module named posix - imported by os (conditional, optional), shutil (conditional)
missing module named resource - imported by posix (top-level), test.support (delayed, conditional, optional)
missing module named grp - imported by shutil (optional), tarfile (optional), pathlib (delayed, optional), subprocess (optional), distutils.archive_util (optional), gevent.subprocess (optional)
missing module named urllib.pathname2url - imported by urllib (conditional), PyInstaller.lib.modulegraph._compat (conditional)
missing module named urllib.unquote - imported by urllib (conditional), bottle (conditional), gevent.pywsgi (optional)
missing module named urllib.quote - imported by urllib (conditional), bottle (conditional)
missing module named urllib.urlencode - imported by urllib (conditional), bottle (conditional)
missing module named _posixsubprocess - imported by subprocess (optional), multiprocessing.util (delayed)
missing module named _posixshmem - imported by multiprocessing.resource_tracker (conditional), multiprocessing.shared_memory (conditional)
missing module named multiprocessing.set_start_method - imported by multiprocessing (top-level), multiprocessing.spawn (top-level)
missing module named multiprocessing.get_start_method - imported by multiprocessing (top-level), multiprocessing.spawn (top-level)
missing module named multiprocessing.get_context - imported by multiprocessing (top-level), multiprocessing.pool (top-level), multiprocessing.managers (top-level), multiprocessing.sharedctypes (top-level)
missing module named multiprocessing.TimeoutError - imported by multiprocessing (top-level), multiprocessing.pool (top-level)
missing module named _scproxy - imported by urllib.request (conditional)
missing module named termios - imported by tty (top-level), getpass (optional)
missing module named 'java.lang' - imported by platform (delayed, optional), xml.sax._exceptions (conditional)
missing module named multiprocessing.BufferTooShort - imported by multiprocessing (top-level), multiprocessing.connection (top-level)
missing module named multiprocessing.AuthenticationError - imported by multiprocessing (top-level), multiprocessing.connection (top-level)
missing module named multiprocessing.Process - imported by multiprocessing (top-level), gevent.tests.test__issue600 (top-level)
missing module named multiprocessing.cpu_count - imported by multiprocessing (top-level), gevent.testing.testrunner (top-level)
missing module named asyncio.DefaultEventLoopPolicy - imported by asyncio (delayed, conditional), asyncio.events (delayed, conditional)
missing module named vms_lib - imported by platform (delayed, optional)
missing module named java - imported by platform (delayed)
missing module named _winreg - imported by platform (delayed, optional), pkg_resources._vendor.appdirs (delayed, conditional)
missing module named readline - imported by cmd (delayed, conditional, optional), code (delayed, conditional, optional), pdb (delayed, optional)
missing module named 'org.python' - imported by pickle (optional), xml.sax (delayed, conditional), setuptools.sandbox (conditional)
missing module named _frozen_importlib_external - imported by importlib._bootstrap (delayed), importlib (optional), importlib.abc (optional), zipimport (top-level)
excluded module named _frozen_importlib - imported by importlib (optional), importlib.abc (optional), zipimport (top-level), PyInstaller.loader.pyimod02_archive (delayed)
missing module named pyimod03_importers - imported by C:\Users\Bishop\AppData\Local\Programs\Python\Python39\Lib\site-packages\PyInstaller\hooks\rthooks\pyi_rth_pkgutil.py (top-level)
missing module named GUI - imported by E:\Programowanie\PyCharmTesty\Python\KakolGaz_mini\main.py (top-level)
missing module named 'nose.plugins' - imported by numpy.testing._private.noseclasses (top-level), numpy.testing._private.nosetester (delayed)
missing module named 'nose.util' - imported by numpy.testing._private.noseclasses (top-level)
missing module named psutil - imported by gevent._compat (delayed, optional), gevent.testing.openfiles (optional), numpy.testing._private.utils (delayed, optional), gevent.tests.test__makefile_ref (optional)
missing module named _dummy_thread - imported by cffi.lock (conditional, optional), numpy.core.arrayprint (optional)
missing module named numpy.core.result_type - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.float_ - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.number - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.object_ - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.all - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.bool_ - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.inf - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.array2string - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.signbit - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.isscalar - imported by numpy.core (delayed), numpy.testing._private.utils (delayed), numpy.lib.polynomial (top-level)
missing module named numpy.core.isinf - imported by numpy.core (delayed), numpy.testing._private.utils (delayed)
missing module named numpy.core.errstate - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.isfinite - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.isnan - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (delayed)
missing module named numpy.core.array - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.isnat - imported by numpy.core (top-level), numpy.testing._private.utils (top-level)
missing module named numpy.core.ndarray - imported by numpy.core (top-level), numpy.testing._private.utils (top-level), numpy.lib.utils (top-level)
missing module named numpy.core.array_repr - imported by numpy.core (top-level), numpy.testing._private.utils (top-level)
missing module named numpy.core.arange - imported by numpy.core (top-level), numpy.testing._private.utils (top-level), numpy.fft.helper (top-level)
missing module named numpy.core.empty - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (top-level), numpy.fft.helper (top-level)
missing module named numpy.core.float32 - imported by numpy.core (top-level), numpy.testing._private.utils (top-level)
missing module named numpy.core.intp - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.testing._private.utils (top-level)
missing module named numpy.core.linspace - imported by numpy.core (top-level), numpy.lib.index_tricks (top-level)
missing module named numpy.core.iinfo - imported by numpy.core (top-level), numpy.lib.twodim_base (top-level)
missing module named numpy.core.transpose - imported by numpy.core (top-level), numpy.lib.function_base (top-level)
missing module named numpy.core.asarray - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.lib.utils (top-level), numpy.fft._pocketfft (top-level), numpy.fft.helper (top-level)
missing module named numpy.core.integer - imported by numpy.core (top-level), numpy.fft.helper (top-level)
missing module named numpy.core.sqrt - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.fft._pocketfft (top-level)
missing module named numpy.core.conjugate - imported by numpy.core (top-level), numpy.fft._pocketfft (top-level)
missing module named numpy.core.swapaxes - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.fft._pocketfft (top-level)
missing module named numpy.core.zeros - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.fft._pocketfft (top-level)
missing module named numpy.core.sort - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.argsort - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.sign - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.count_nonzero - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.divide - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.matmul - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.asanyarray - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.atleast_2d - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.product - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.amax - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.amin - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.moveaxis - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.geterrobj - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.finfo - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.sum - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.fastCopyAndTranspose - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.multiply - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.add - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.dot - imported by numpy.core (top-level), numpy.linalg.linalg (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.Inf - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.newaxis - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.complexfloating - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.inexact - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.cdouble - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.csingle - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.double - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.single - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.intc - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.empty_like - imported by numpy.core (top-level), numpy.linalg.linalg (top-level)
missing module named numpy.core.ufunc - imported by numpy.core (top-level), numpy.lib.utils (top-level)
missing module named numpy.core.ones - imported by numpy.core (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.hstack - imported by numpy.core (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.atleast_1d - imported by numpy.core (top-level), numpy.lib.polynomial (top-level)
missing module named numpy.core.atleast_3d - imported by numpy.core (top-level), numpy.lib.shape_base (top-level)
missing module named numpy.core.vstack - imported by numpy.core (top-level), numpy.lib.shape_base (top-level)
missing module named pickle5 - imported by numpy.compat.py3k (optional)
missing module named numpy.eye - imported by numpy (delayed), numpy.core.numeric (delayed)
missing module named numpy.recarray - imported by numpy (top-level), numpy.ma.mrecords (top-level)
missing module named numpy.dtype - imported by numpy (top-level), numpy.ma.mrecords (top-level), numpy.ctypeslib (top-level)
missing module named numpy.expand_dims - imported by numpy (top-level), numpy.ma.core (top-level)
missing module named numpy.array - imported by numpy (top-level), numpy.ma.core (top-level), numpy.ma.extras (top-level), numpy.ma.mrecords (top-level), numpy.ctypeslib (top-level)
missing module named numpy.bool_ - imported by numpy (top-level), numpy.ma.core (top-level), numpy.ma.mrecords (top-level)
missing module named numpy.iscomplexobj - imported by numpy (top-level), numpy.ma.core (top-level)
missing module named numpy.amin - imported by numpy (top-level), numpy.ma.core (top-level)
missing module named numpy.amax - imported by numpy (top-level), numpy.ma.core (top-level)
missing module named numpy.ndarray - imported by numpy (top-level), numpy.ma.core (top-level), numpy.ma.extras (top-level), numpy.ma.mrecords (top-level), numpy.ctypeslib (top-level)
missing module named numpy.histogramdd - imported by numpy (delayed), numpy.lib.twodim_base (delayed)


submitted by CptBishop to learnpython


2021.02.10 18:01 Jimbobmij Errors trying to pip install pandas_profiling, any ideas? (Long wall of text error message in post)

Collecting pandas_profiling
Using cached pandas_profiling-2.10.1-py2.py3-none-any.whl (240 kB)
Requirement already satisfied: seaborn>=0.10.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas_profiling) (0.11.1)
Requirement already satisfied: tangled-up-in-unicode>=0.0.6 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas_profiling) (0.0.6)
Requirement already satisfied: ipywidgets>=7.5.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas_profiling) (7.6.3)
Requirement already satisfied: pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas_profiling) (1.2.1)
Requirement already satisfied: scipy>=1.4.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas_profiling) (1.6.0)
Requirement already satisfied: matplotlib>=3.2.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas_profiling) (3.3.3)
Requirement already satisfied: attrs>=19.3.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas_profiling) (20.3.0)
Requirement already satisfied: numpy>=1.16.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas_profiling) (1.19.5)
Requirement already satisfied: jinja2>=2.11.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas_profiling) (2.11.2)
Note: you may need to restart the kernel to use updated packages.
Collecting visions[type_image_path]==0.6.0
Using cached visions-0.6.0-py3-none-any.whl (75 kB)
Requirement already satisfied: networkx>=2.4 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from visions[type_image_path]==0.6.0->pandas_profiling) (2.5)
Requirement already satisfied: Pillow in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from visions[type_image_path]==0.6.0->pandas_profiling) (8.1.0)
Collecting confuse>=1.0.0
Using cached confuse-1.4.0-py2.py3-none-any.whl (21 kB)
Collecting htmlmin>=0.1.12
Using cached htmlmin-0.1.12-py3-none-any.whl
Requirement already satisfied: widgetsnbextension~=3.5.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipywidgets>=7.5.1->pandas_profiling) (3.5.1)
Requirement already satisfied: nbformat>=4.2.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipywidgets>=7.5.1->pandas_profiling) (5.1.2)
Requirement already satisfied: traitlets>=4.3.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipywidgets>=7.5.1->pandas_profiling) (5.0.5)
Requirement already satisfied: ipykernel>=4.5.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipywidgets>=7.5.1->pandas_profiling) (5.3.4)
Requirement already satisfied: jupyterlab-widgets>=1.0.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipywidgets>=7.5.1->pandas_profiling) (1.0.0)
Requirement already satisfied: ipython>=4.0.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipywidgets>=7.5.1->pandas_profiling) (7.19.0)
Requirement already satisfied: jupyter-client in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipykernel>=4.5.1->ipywidgets>=7.5.1->pandas_profiling) (6.1.7)
Requirement already satisfied: tornado>=4.2 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipykernel>=4.5.1->ipywidgets>=7.5.1->pandas_profiling) (6.1)
Requirement already satisfied: jedi>=0.10 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (0.18.0)
Requirement already satisfied: pygments in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (2.7.4)
Requirement already satisfied: colorama in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (0.4.4)
Requirement already satisfied: backcall in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (0.2.0)
Requirement already satisfied: decorator in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (4.4.2)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (3.0.8)
Requirement already satisfied: setuptools>=18.5 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (51.1.2.post20210112)
Requirement already satisfied: pickleshare in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (0.7.5)
Requirement already satisfied: parso<0.9.0,>=0.8.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from jedi>=0.10->ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (0.8.1)
Requirement already satisfied: MarkupSafe>=0.23 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from jinja2>=2.11.1->pandas_profiling) (1.1.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from matplotlib>=3.2.0->pandas_profiling) (2.4.7)
Requirement already satisfied: cycler>=0.10 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from matplotlib>=3.2.0->pandas_profiling) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from matplotlib>=3.2.0->pandas_profiling) (2.8.1)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from matplotlib>=3.2.0->pandas_profiling) (1.3.1)
Requirement already satisfied: six in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from cycler>=0.10->matplotlib>=3.2.0->pandas_profiling) (1.15.0)
Collecting missingno>=0.4.2
Using cached missingno-0.4.2-py3-none-any.whl (9.7 kB)
Requirement already satisfied: jupyter-core in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbformat>=4.2.0->ipywidgets>=7.5.1->pandas_profiling) (4.7.0)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbformat>=4.2.0->ipywidgets>=7.5.1->pandas_profiling) (3.2.0)
Requirement already satisfied: ipython-genutils in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbformat>=4.2.0->ipywidgets>=7.5.1->pandas_profiling) (0.2.0)
Requirement already satisfied: pyrsistent>=0.14.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets>=7.5.1->pandas_profiling) (0.17.3)
Requirement already satisfied: pytz>=2017.3 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3->pandas_profiling) (2020.5)
Collecting phik>=0.10.0
Using cached phik-0.11.0-py3-none-any.whl
Collecting joblib
Using cached joblib-1.0.1-py3-none-any.whl (303 kB)
Collecting numba>=0.38.1
Using cached numba-0.51.2.tar.gz (2.1 MB)
Collecting llvmlite<0.35,>=0.34.0.dev0
Using cached llvmlite-0.34.0.tar.gz (107 kB)
Requirement already satisfied: wcwidth in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython>=4.0.0->ipywidgets>=7.5.1->pandas_profiling) (0.2.5)
ERROR: Command errored out with exit status 1:
command: 'C:\Users\Admin\miniconda3\envs\LiveCode1\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\Admin\AppData\Local\Temp\pip-install-_wdslkt1\numba_b0179c5e5d454492a00e913b82ecee6c\setup.py'"'"'; __file__='"'"'C:\Users\Admin\AppData\Local\Temp\pip-install-_wdslkt1\numba_b0179c5e5d454492a00e913b82ecee6c\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\Admin\AppData\Local\Temp\pip-wheel-rir0souj'
cwd: C:\Users\Admin\AppData\Local\Temp\pip-install-_wdslkt1\numba_b0179c5e5d454492a00e913b82ecee6c\
Complete output (761 lines):
TBB not found
Using OpenMP from: True
running bdist_wheel
running build
got version from file C:\Users\Admin\AppData\Local\Temp\pip-install-_wdslkt1\numba_b0179c5e5d454492a00e913b82ecee6c\numba/_version.py {'version': '0.51.2', 'full': '9d570961590c09a1eba748c9c37e91d1224fc9ad'}
running build_py
creating build
creating build\lib.win-amd64-3.9
creating build\lib.win-amd64-3.9\numba
copying numba\extending.py -> build\lib.win-amd64-3.9\numba
copying numba\runtests.py -> build\lib.win-amd64-3.9\numba
copying numba\_version.py -> build\lib.win-amd64-3.9\numba
copying numba\__init__.py -> build\lib.win-amd64-3.9\numba
copying numba\__main__.py -> build\lib.win-amd64-3.9\numba
creating build\lib.win-amd64-3.9\numba\cext
copying numba\cext\__init__.py -> build\lib.win-amd64-3.9\numba\cext
creating build\lib.win-amd64-3.9\numba\core
copying numba\core\analysis.py -> build\lib.win-amd64-3.9\numba\core
copying numba\core\base.py -> build\lib.win-amd64-3.9\numba\core
copying numba\core\boxing.py -> build\lib.win-amd64-3.9\numba\core
copying numba\core\bytecode.py -> build\lib.win-amd64-3.9\numba\core
copying numba\core\byteflow.py -> build\lib.win-amd64-3.9\numba\core
copying numba\core\caching.py -> build\lib.win-amd64-3.9\numba\core
copying numba\core\callconv.py -> build\lib.win-amd64-3.9\numba\core
copying numba\core\callwrapper.py -> build\lib.win-amd64-3.9\numba\core
copying numba\core\ccallback.py -> build\lib.win-amd64-3.9\numba\core
Collecting requests>=2.24.0
Using cached requests-2.25.1-py2.py3-none-any.whl (61 kB)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from requests>=2.24.0->pandas_profiling) (2020.12.5)
Collecting chardet<5,>=3.0.2
Using cached chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting idna<3,>=2.5
Using cached idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting tqdm>=4.48.2
Using cached tqdm-4.56.1-py2.py3-none-any.whl (72 kB)
Collecting urllib3<1.27,>=1.21.1
Using cached urllib3-1.26.3-py2.py3-none-any.whl (137 kB)
Requirement already satisfied: notebook>=4.4.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (6.1.6)
Requirement already satisfied: nbconvert in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (6.0.7)
Requirement already satisfied: argon2-cffi in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (20.1.0)
Requirement already satisfied: terminado>=0.8.3 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.9.2)
Requirement already satisfied: prometheus-client in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.9.0)
Requirement already satisfied: Send2Trash in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (1.5.0)
Requirement already satisfied: pyzmq>=17 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (20.0.0)
Requirement already satisfied: pywin32>=1.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from jupyter-core->nbformat>=4.2.0->ipywidgets>=7.5.1->pandas_profiling) (228)
Requirement already satisfied: pywinpty>=0.5 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from terminado>=0.8.3->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.5.7)
Requirement already satisfied: cffi>=1.0.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from argon2-cffi->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (1.14.4)
Requirement already satisfied: pycparser in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from cffi>=1.0.0->argon2-cffi->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (2.20)
Collecting imagehash
Using cached ImageHash-4.2.0-py2.py3-none-any.whl (295 kB)
Requirement already satisfied: PyWavelets in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from imagehash->visions[type_image_path]==0.6.0->pandas_profiling) (1.1.1)
Requirement already satisfied: nbclient<0.6.0,>=0.5.0 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.5.1)
Requirement already satisfied: entrypoints>=0.2.2 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from
nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.3) Requirement already satisfied: jupyterlab-pygments in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.1.2) Requirement already satisfied: defusedxml in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.6.0) Requirement already satisfied: bleach in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (3.2.1) Requirement already satisfied: testpath in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.4.4) Requirement already satisfied: mistune<2,>=0.8.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.8.4) Requirement already satisfied: pandocfilters>=1.4.1 in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (1.4.3) Requirement already satisfied: async-generator in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbclient<0.6.0,>=0.5.0->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (1.10) Requirement already satisfied: nest-asyncio in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from nbclient<0.6.0,>=0.5.0->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (1.4.3) Requirement already satisfied: webencodings in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from 
bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (0.5.1) Requirement already satisfied: packaging in c:\users\admin\miniconda3\envs\livecode1\lib\site-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas_profiling) (20.8) Collecting pyyaml Using cached PyYAML-5.4.1-cp39-cp39-win_amd64.whl (213 kB) Building wheels for collected packages: numba, llvmlite Building wheel for numba (setup.py): started Building wheel for numba (setup.py): finished with status 'error' Running setup.py clean for numba Building wheel for llvmlite (setup.py): started Building wheel for llvmlite (setup.py): finished with status 'error' Running setup.py clean for llvmlite Failed to build numba llvmlite Installing collected packages: llvmlite, visions, urllib3, pyyaml, numba, joblib, imagehash, idna, chardet, tqdm, requests, phik, missingno, htmlmin, confuse, pandas-profiling Running setup.py install for llvmlite: started Running setup.py install for llvmlite: finished with status 'error' copying numba\core\cgutils.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\codegen.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\compiler.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\compiler_lock.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\compiler_machinery.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\config.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\consts.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\controlflow.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\cpu.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\cpu_options.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\dataflow.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\debuginfo.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\decorators.py -> build\lib.win-amd64-3.9\numba\core copying 
numba\core\descriptors.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\dispatcher.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\entrypoints.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\environment.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\errors.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\extending.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\externals.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\fastmathpass.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\funcdesc.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\generators.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\imputils.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\inline_closurecall.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\interpreter.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\intrinsics.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\ir.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\ir_utils.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\itanium_mangler.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\lowering.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\object_mode_passes.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\optional.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\options.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\postproc.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\pylowering.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\pythonapi.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\registry.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\removerefctpass.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\serialize.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\sigutils.py -> build\lib.win-amd64-3.9\numba\core copying 
numba\core\ssa.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\tracing.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\transforms.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\typed_passes.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\typeinfer.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\untyped_passes.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\utils.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\withcontexts.py -> build\lib.win-amd64-3.9\numba\core copying numba\core\init.py -> build\lib.win-amd64-3.9\numba\core creating build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\builtins.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\charseq.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\cmathimpl.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\enumimpl.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\hashing.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\heapq.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\iterators.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\listobj.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\mathimpl.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\numbers.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\printimpl.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\randomimpl.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\rangeobj.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\setobj.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\slicing.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\tupleobj.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\unicode.py -> build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\unicode_support.py -> 
build\lib.win-amd64-3.9\numba\cpython copying numba\cpython\init.py -> build\lib.win-amd64-3.9\numba\cpython creating build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\api.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\args.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\codegen.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\compiler.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\cudadecl.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\cudaimpl.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\cudamath.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\cuda_paths.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\decorators.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\descriptor.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\device_init.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\dispatcher.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\envvars.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\errors.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\initialize.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\intrinsic_wrapper.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\libdevice.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\models.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\nvvmutils.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\printimpl.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\random.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\simulator_init.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\stubs.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\target.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\testing.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\types.py -> build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\vectorizers.py -> 
build\lib.win-amd64-3.9\numba\cuda copying numba\cuda\init.py -> build\lib.win-amd64-3.9\numba\cuda creating build\lib.win-amd64-3.9\numba\experimental copying numba\experimental\function_type.py -> build\lib.win-amd64-3.9\numba\experimental copying numba\experimental\structref.py -> build\lib.win-amd64-3.9\numba\experimental copying numba\experimental\init.py -> build\lib.win-amd64-3.9\numba\experimental creating build\lib.win-amd64-3.9\numba\misc copying numba\misc\appdirs.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\cffiimpl.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\dummyarray.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\dump_style.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\findlib.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\gdb_hook.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\inspection.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\literal.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\mergesort.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\numba_entry.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\numba_sysinfo.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\quicksort.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\special.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\timsort.py -> build\lib.win-amd64-3.9\numba\misc copying numba\misc\init.py -> build\lib.win-amd64-3.9\numba\misc creating build\lib.win-amd64-3.9\numba\np copying numba\np\arraymath.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\arrayobj.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\extensions.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\linalg.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\npdatetime.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\npdatetime_helpers.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\npyfuncs.py -> 
build\lib.win-amd64-3.9\numba\np copying numba\np\npyimpl.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\numpy_support.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\polynomial.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\ufunc_db.py -> build\lib.win-amd64-3.9\numba\np copying numba\np\init.py -> build\lib.win-amd64-3.9\numba\np creating build\lib.win-amd64-3.9\numba\parfors copying numba\parfors\array_analysis.py -> build\lib.win-amd64-3.9\numba\parfors copying numba\parfors\parfor.py -> build\lib.win-amd64-3.9\numba\parfors copying numba\parfors\parfor_lowering.py -> build\lib.win-amd64-3.9\numba\parfors copying numba\parfors\parfor_lowering_utils.py -> build\lib.win-amd64-3.9\numba\parfors copying numba\parfors\init.py -> build\lib.win-amd64-3.9\numba\parfors creating build\lib.win-amd64-3.9\numba\pycc copying numba\pycc\cc.py -> build\lib.win-amd64-3.9\numba\pycc copying numba\pycc\compiler.py -> build\lib.win-amd64-3.9\numba\pycc copying numba\pycc\decorators.py -> build\lib.win-amd64-3.9\numba\pycc copying numba\pycc\llvm_types.py -> build\lib.win-amd64-3.9\numba\pycc copying numba\pycc\platform.py -> build\lib.win-amd64-3.9\numba\pycc copying numba\pycc\init.py -> build\lib.win-amd64-3.9\numba\pycc creating build\lib.win-amd64-3.9\numba\roc copying numba\roc\api.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\codegen.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\compiler.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\decorators.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\descriptor.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\dispatch.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\enums.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\gcn_occupancy.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\hsadecl.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\hsaimpl.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\initialize.py -> 
build\lib.win-amd64-3.9\numba\roc copying numba\roc\mathdecl.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\mathimpl.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\stubs.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\target.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\vectorizers.py -> build\lib.win-amd64-3.9\numba\roc copying numba\roc\init.py -> build\lib.win-amd64-3.9\numba\roc creating build\lib.win-amd64-3.9\numba\scripts copying numba\scripts\generate_lower_listing.py -> build\lib.win-amd64-3.9\numba\scripts copying numba\scripts\init.py -> build\lib.win-amd64-3.9\numba\scripts creating build\lib.win-amd64-3.9\numba\stencils copying numba\stencils\stencil.py -> build\lib.win-amd64-3.9\numba\stencils copying numba\stencils\stencilparfor.py -> build\lib.win-amd64-3.9\numba\stencils copying numba\stencils\init.py -> build\lib.win-amd64-3.9\numba\stencils creating build\lib.win-amd64-3.9\numba\testing copying numba\testing\loader.py -> build\lib.win-amd64-3.9\numba\testing copying numba\testing\main.py -> build\lib.win-amd64-3.9\numba\testing copying numba\testing\notebook.py -> build\lib.win-amd64-3.9\numba\testing copying numba\testing_runtests.py -> build\lib.win-amd64-3.9\numba\testing copying numba\testing\init.py -> build\lib.win-amd64-3.9\numba\testing copying numba\testing\main_.py -> build\lib.win-amd64-3.9\numba\testing creating build\lib.win-amd64-3.9\numba\tests copying numba\tests\annotation_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\cache_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\cffi_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\cfunc_cache_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\compile_with_pycc.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\complex_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\ctypes_usecases.py -> build\lib.win-amd64-3.9\numba\tests 
copying numba\tests\dummy_module.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\enum_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\error_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\inlining_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\matmul_usecase.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\orphaned_semaphore_usecase.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\overload_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\parfors_cache_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\pdlike_usecase.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\recursion_usecases.py -> build\lib.win-amd64-3.9\numba\tests copying numba\tests\serialize_usecases.py -> build\lib.win-amd64-3.9\numba\tests
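The log above shows numba and llvmlite being compiled from source: pip found no prebuilt wheel for that Python/OS combination, fell back to `setup.py bdist_wheel`, and the build failed ("TBB not found", no local LLVM toolchain). A hedged sketch of the usual workaround, assuming a conda environment like the `LiveCode1` one in the log and access to conda-forge: install those two as prebuilt conda packages first, then let pip resolve the rest.

```shell
# Workaround sketch (not from the original post): take numba/llvmlite as
# prebuilt conda packages instead of letting pip build them from source.
conda activate LiveCode1
conda install -c conda-forge numba llvmlite
# Retry afterwards; pip should now treat numba/llvmlite as already satisfied.
pip install pandas-profiling
```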
submitted by Jimbobmij to learnpython [link] [comments]


2020.11.29 17:24 thomasbbbb [code] Klibanov algorithm for one option and 10mn laps

Here is the Python implementation of the algorithm described in this article:
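Before the full script, a minimal self-contained sketch of the quadratic-interpolation step it relies on (the helper names `quad_coef` and `u` are illustrative; the script's equivalents are `funcQuadraticInterpolationCoef` and `funcUab`). Three equally spaced samples are fitted with c0 + c1*t + c2*t**2, mapped to t = 0, -1, -2 by the matrix A, and the polynomial is then evaluated at other times to forecast:

```python
import numpy as np

def quad_coef(values):
    # Fit c0 + c1*t + c2*t**2 through three samples taken at t = 0, -1, -2:
    # each row of A is (1, t, t**2) for one of those t values.
    A = np.array([[1, 0, 0], [1, -1, 1], [1, -2, 4]])
    return np.linalg.solve(A, np.array(values, dtype=float))

def u(t, c):
    # Evaluate the fitted polynomial at time t.
    return c[2] * t**2 + c[1] * t + c[0]

c = quad_coef([2.0, 3.0, 6.0])      # here the fit works out to u(t) = t**2 + 2
print(round(u(-1, c), 6))           # reproduces the middle sample: 3.0
print(round(u(1, c), 6))            # extrapolated one grid step forward: 3.0
```

Because A is exactly the Vandermonde matrix of the sample times, the fit reproduces the three inputs, which is a quick sanity check before trusting the extrapolated values.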
#!/usr/bin/python
#----------
# This unusual and intriguing algorithm was originally invented
# by Michael V. Klibanov, Professor, Department of Mathematics and Statistics,
# University of North Carolina at Charlotte. It is published in the following
# paper:
# M.V. Klibanov, A.V. Kuzhuget and K.V. Golubnichiy,
# "An ill-posed problem for the Black-Scholes equation
# for a profitable forecast of prices of stock options on real market data",
# Inverse Problems, 32 (2016) 015010.
#----------
# Script assumes it's called by crontab, at the opening of the market
#-----
import numpy as np
import pause, datetime
from bs4 import BeautifulSoup
import requests

# Quadratic interpolation of the bid and ask option prices, and linear
# interpolation in between (https://people.math.sc.edu/kellerlv/Quadratic_Interpolation.pdf)
def funcQuadraticInterpolationCoef(values): # There is 'scipy.interpolate.interp1d' too
    y = np.array(values)
    A = np.array([[1,0,0],[1,-1,1],[1,-2,4]])
    return np.linalg.solve(A,y)

# https://en.wikipedia.org/wiki/Polynomial_regression
def funcUab(t,coef):
    return coef[2]*t**2 + coef[1]*t + coef[0]

def funcF(s, sa, sb, ua, ub):
    return (s-sb)*(ua-ub)/(sa-sb) + ub

# Initialize the volatility and option lists of 3 values
optionBid = [0] # dummy value to pop in the loop
optionAsk = [0] # dummy value to pop in the loop
optionVol = [0] # dummy value to pop in the loop (was `volatility`; the loop reads and writes optionVol)

# Initialization for the loop
Nt = 4 # even number greater than 2: 4, 6, ...
Ns = 2 # even number greater than 0: 2, 4, ...
twotau = 2 # not a parameter...
alpha = 0.01 # not a parameter...
dt = twotau / Nt # time grid step
dimA = ( (Nt+1)*(Ns+1), (Nt+1)*(Ns+1) ) # Matrix A dimensions
dimb = ( (Nt+1)*(Ns+1), 1 ) # Vector b dimensions
A = np.zeros( dimA ) # Matrix A
b = np.zeros( dimb ) # Vector b
portfolio = 1000000 # Money 'available'
securityMargin = 0.00083 # EMPIRICAL: needs to be adjusted when taking into account the transaction fees (should rise, see the article p.8)

# Wait 10mn after the opening of the market
datet = datetime.datetime.now()
datet = datet + datetime.timedelta(minutes=10) # timedelta instead of minute+10, which overflows past :50
pause.until(datet)

# Record the stock and option values and wait 10mn more
def funcRetrieveStockOptionVolatility():
    stockOptVol = [0, 0, 0, 0, 0] # (added) [stock bid, stock ask, option bid, option ask, volatility]
    itm_calls = [] # (added)
    otm_calls = [] # (added)
    # Stock
    stock_data_url = "https://finance.yahoo.com/quote/MSFT?p=MSFT"
    stock_data_html = requests.get(stock_data_url).content
    stock_content = BeautifulSoup(stock_data_html, "html.parser")
    stock_bid = stock_content.find("td", {'class': 'Ta(end) Fw(600) Lh(14px)', 'data-test': "BID-value"})
    print(stock_bid)
    stock_ask = stock_content.find("td", {'class': 'Ta(end) Fw(600) Lh(14px)', 'data-test': "ASK-value"})
    print(stock_ask)
    stockOptVol[0] = float(stock_bid.text.split()[0]) # float() added: the loop does arithmetic with these
    stockOptVol[1] = float(stock_ask.text.split()[0])
    # Option
    option_data_url = "https://finance.yahoo.com/quote/MSFT/options?p=MSFT&date=1631836800"
    option_data_html = requests.get(option_data_url).content
    option_content = BeautifulSoup(option_data_html, "html.parser")
    call_option_table = option_content.find("table", {'class': 'calls W(100%) Pos(r) Bd(0) Pt(0) list-options'})
    calls = call_option_table.find_all("tr")[1:]
    it = 0
    for call_option in calls:
        it += 1
        print("it = ", it)
        if "in-the-money " in str(call_option):
            itm_calls.append(call_option)
            print("in the money")
            itm_put_data = []
            for td in BeautifulSoup(str(itm_calls[-1]), "html.parser").find_all("td"):
                itm_put_data.append(td.text)
            print(itm_put_data)
            if itm_put_data[0] == 'MSFT210917C00220000': # One single option
                stockOptVol[2] = float(itm_put_data[4])
                stockOptVol[3] = float(itm_put_data[5])
                stockOptVol[4] = float(itm_put_data[-1].strip('%'))
        else:
            otm_calls.append(call_option)
            print("out the money")
    print("bid = ", stockOptVol[2], "\nask = ", stockOptVol[3], "\nvol = ", stockOptVol[4]) # was undefined option_bid/option_ask/option_vol
    return stockOptVol

# Record option and volatility
stockOptVol = funcRetrieveStockOptionVolatility()
optionBid.append(stockOptVol[2])
optionAsk.append(stockOptVol[3])
optionVol.append(stockOptVol[4])

# Wait another 10mn to record a second value for the quadratic interpolation
datet = datetime.datetime.now()
datet = datet + datetime.timedelta(minutes=10)
pause.until(datet)
stockOptVol = funcRetrieveStockOptionVolatility()
optionBid.append(stockOptVol[2])
optionAsk.append(stockOptVol[3])
optionVol.append(stockOptVol[4])

tradeAtTimeTau = False
tradeAtTimeTwoTau = False

# Run the loop until 30mn before closure
datet = datetime.datetime.now()
datetend = datet + datetime.timedelta(hours=6, minutes=10)
while datet <= datetend:
    datet = datet + datetime.timedelta(minutes=10)
    pause.until(datet) # (added) wait until the next 10mn lap; the pacing call is missing from the original listing
    optionBid.pop(0)
    optionAsk.pop(0)
    optionVol.pop(0)
    stockOptVol = funcRetrieveStockOptionVolatility()
    stockBid = stockOptVol[0]
    stockAsk = stockOptVol[1]
    optionBid.append(stockOptVol[2])
    optionAsk.append(stockOptVol[3])
    optionVol.append(stockOptVol[4]) # was stockOptVol[5], one past the end of the list
    # Trade if required
    if tradeAtTimeTau == True or tradeAtTimeTwoTau == True:
        # sell
        if tradeAtTimeTau == True:
            portfolio += min(optionAsk[2], sellingPriceAtTimeTau) * 140 # sell 140 options bought 10mn ago
        tradeAtTimeTau = tradeAtTimeTwoTau
        sellingPriceAtTimeTau = sellingPriceAtTimeTwoTau
        sellingPriceAtTimeTwoTau = False
    else: # forecast the option when no trading
        # Interpolation
        coefa = funcQuadraticInterpolationCoef(optionAsk) # quadratic interpolation of the option ask price
        coefb = funcQuadraticInterpolationCoef(optionBid) # quadratic interpolation of the option bid price
        coefs = funcQuadraticInterpolationCoef(optionVol) # quadratic interpolation of the volatility sigma
        sa = stockAsk # stock ask price
        sb = stockBid # stock bid price
        ds = (sa - sb) / Ns # stock grid step
        for k in range (0, Ns+1): # fill the matrix and the vector
            for j in range (0, Nt+1):
                Atemp = np.zeros( dimA )
                btemp = np.zeros( dimb )
                print("k = {k}, j = {j}".format(k=k,j=j))
                if k == 0:
                    Atemp[ k*(Nt+1)+j, k*(Nt+1)+j ] = 1
                    btemp[ k*(Nt+1)+j ] = funcUab(j*dt,coefb)
                elif k == Ns:
                    Atemp[ k*(Nt+1)+j, k*(Nt+1)+j ] = 1
                    btemp[ k*(Nt+1)+j ] = funcUab(j*dt,coefa)
                elif j == 0:
                    Atemp[ k*(Nt+1)+j, k*(Nt+1)+j ] = 1
                    btemp[ k*(Nt+1)+j ] = funcF( k*ds+sb, sa, sb, funcUab(j*dt,coefa), funcUab(j*dt,coefb) )
                elif j == Nt:
                    # do nothing
                    pass
                else: # main case
                    akj = 0.5*(255*13*3)* funcUab(j*dt, coefs)**2 * (k*ds + sb)**2
                    dts = (twotau-dt)/Nt * (sa-sb-ds)/Ns
                    #----------
                    #----- Integral of the generator L
                    #----------
                    #----- time derivative
                    #----------
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k+0)*(Nt+1)+(j+1) ] = dts / dt**2 # k,j+1 ~ k,j+1
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k+0)*(Nt+1)+(j-1) ] = dts / dt**2 # k,j-1 ~ k,j-1
                    #-----
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k+0)*(Nt+1)+(j-1) ] = - dts / dt**2 # k,j+1 ~ k,j-1
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k+0)*(Nt+1)+(j+1) ] = - dts / dt**2 # k,j-1 ~ k,j+1
                    #----------
                    #----- stock derivative
                    #----------
                    Atemp[ (k+1)*(Nt+1)+(j+0), (k+1)*(Nt+1)+(j+0) ] = akj**2 * dts / ds**4 # k+1,j ~ k+1,j
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+0) ] = 4 * akj**2 * dts / ds**4 # k,j ~ k,j
                    Atemp[ (k-1)*(Nt+1)+(j+0), (k-1)*(Nt+1)+(j+0) ] = akj**2 * dts / ds**4 # k-1,j ~ k-1,j
                    #-----
                    Atemp[ (k+1)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+0) ] = -2 * akj**2 * dts / ds**4 # k+1,j ~ k,j
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+1)*(Nt+1)+(j+0) ] = -2 * akj**2 * dts / ds**4 # k,j ~ k+1,j
                    #-----
                    Atemp[ (k-1)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+0) ] = -2 * akj**2 * dts / ds**4 # k-1,j ~ k,j
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k-1)*(Nt+1)+(j+0) ] = -2 * akj**2 * dts / ds**4 # k,j ~ k-1,j
                    #-----
                    Atemp[ (k+1)*(Nt+1)+(j+0), (k-1)*(Nt+1)+(j+0) ] = akj**2 * dts / ds**4 # k+1,j ~ k-1,j
                    Atemp[ (k-1)*(Nt+1)+(j+0), (k+1)*(Nt+1)+(j+0) ] = akj**2 * dts / ds**4 # k-1,j ~ k+1,j
                    #----------
                    #----- time and stock derivatives
                    #----------
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k+1)*(Nt+1)+(j+0) ] = akj * dts / (dt*ds**2) # k,j+1 ~ k+1,j
                    Atemp[ (k+1)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+1) ] = akj * dts / (dt*ds**2) # k+1,j ~ k,j+1
                    #-----
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k+1)*(Nt+1)+(j+0) ] = - akj * dts / (dt*ds**2) # k,j-1 ~ k+1,j
                    Atemp[ (k+1)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j-1) ] = - akj * dts / (dt*ds**2) # k+1,j ~ k,j-1
                    #----------
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k+0)*(Nt+1)+(j+0) ] = -2 * akj * dts / (dt*ds**2) # k,j+1 ~ k,j
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+1) ] = -2 * akj * dts / (dt*ds**2) # k,j ~ k,j+1
                    #-----
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k+0)*(Nt+1)+(j+0) ] = 2 * akj * dts / (dt*ds**2) # k,j-1 ~ k,j
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j-1) ] = 2 * akj * dts / (dt*ds**2) # k,j ~ k,j-1
                    #----------
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k-1)*(Nt+1)+(j+0) ] = akj * dts / (dt*ds**2) # k,j+1 ~ k-1,j
                    Atemp[ (k-1)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+1) ] = akj * dts / (dt*ds**2) # k-1,j ~ k,j+1
                    #-----
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k-1)*(Nt+1)+(j+0) ] = - akj * dts / (dt*ds**2) # k,j-1 ~ k-1,j
                    Atemp[ (k-1)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j-1) ] = - akj * dts / (dt*ds**2) # k-1,j ~ k,j-1
                    #----------
                    #----------
                    #----- Regularisation term - using alpha = 0.01
                    #----------
                    #----------
                    #----- H2 norm: 0 derivative
                    #----------
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+0) ] += alpha # k,j ~ k,j
                    #-----
                    coef = funcF( k*ds+sb, sa, sb, funcUab(j*dt,coefa), funcUab(j*dt,coefb) )
                    btemp[ (k+0)*(Nt+1)+(j+0) ] += alpha * 2 * coef
                    #----------
                    #----- H2 norm: time derivative
                    #----------
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k+0)*(Nt+1)+(j+1) ] += alpha / dt**2 # k,j+1 ~ k,j+1
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k+0)*(Nt+1)+(j-1) ] += alpha / dt**2 # k,j-1 ~ k,j-1
                    #-----
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k+0)*(Nt+1)+(j-1) ] += -alpha / dt**2 # k,j+1 ~ k,j-1
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k+0)*(Nt+1)+(j+1) ] += -alpha / dt**2 # k,j-1 ~ k,j+1
                    #-----
                    coef = ( funcF( k*ds+sb, sa, sb, funcUab((j+1)*dt,coefa), funcUab((j+1)*dt,coefb) ) \
                           - funcF( k*ds+sb, sa, sb, funcUab((j-1)*dt,coefa), funcUab((j-1)*dt,coefb) ) ) / dt
                    btemp[ (k+0)*(Nt+1)+(j+1) ] += alpha * 2 * coef
                    btemp[ (k+0)*(Nt+1)+(j-1) ] += - alpha * 2 * coef
                    #----------
                    #----- H2 norm: stock derivative
                    #----------
                    Atemp[ (k+1)*(Nt+1)+(j+0), (k+1)*(Nt+1)+(j+0) ] += alpha / ds**2 # k+1,j ~ k+1,j
                    Atemp[ (k-1)*(Nt+1)+(j+0), (k-1)*(Nt+1)+(j+0) ] += alpha / ds**2 # k-1,j ~ k-1,j
                    #-----
                    Atemp[ (k+1)*(Nt+1)+(j+0), (k-1)*(Nt+1)+(j+0) ] += -alpha / ds**2 # k+1,j ~ k-1,j
                    Atemp[ (k-1)*(Nt+1)+(j+0), (k+1)*(Nt+1)+(j+0) ] += -alpha / ds**2 # k-1,j ~ k+1,j
                    #-----
                    coef = ( funcUab(j*dt,coefa) - funcUab(j*dt,coefb) ) / (sa - sb)
                    btemp[ (k+1)*(Nt+1)+(j+0) ] += alpha * 2 * coef
                    btemp[ (k-1)*(Nt+1)+(j+0) ] += - alpha * 2 * coef
                    #----------
                    #----- H2 norm: stock and time derivative
                    #----------
                    Atemp[ (k+1)*(Nt+1)+(j+1), (k+1)*(Nt+1)+(j+1) ] += alpha / (ds*dt) # k+1,j+1 ~ k+1,j+1
                    Atemp[ (k-1)*(Nt+1)+(j+1), (k-1)*(Nt+1)+(j+1) ] += alpha / (ds*dt) # k-1,j+1 ~ k-1,j+1
                    Atemp[ (k-1)*(Nt+1)+(j-1), (k-1)*(Nt+1)+(j-1) ] += alpha / (ds*dt) # k-1,j-1 ~ k-1,j-1
                    Atemp[ (k+1)*(Nt+1)+(j-1), (k+1)*(Nt+1)+(j-1) ] += alpha / (ds*dt) # k+1,j-1 ~ k+1,j-1
                    #----------
                    Atemp[ (k+1)*(Nt+1)+(j+1), (k-1)*(Nt+1)+(j+1) ] += -alpha / (ds*dt) # k+1,j+1 ~ k-1,j+1
                    Atemp[ (k+1)*(Nt+1)+(j+1), (k+1)*(Nt+1)+(j-1) ] += -alpha / (ds*dt) # k+1,j+1 ~ k+1,j-1
                    Atemp[ (k+1)*(Nt+1)+(j+1), (k-1)*(Nt+1)+(j-1) ] += alpha / (ds*dt) # k+1,j+1 ~ k-1,j-1
                    #-----
                    Atemp[ (k-1)*(Nt+1)+(j+1), (k+1)*(Nt+1)+(j+1) ] += -alpha / (ds*dt) # k-1,j+1 ~ k+1,j+1
                    Atemp[ (k+1)*(Nt+1)+(j-1), (k+1)*(Nt+1)+(j+1) ] += -alpha / (ds*dt) # k+1,j-1 ~ k+1,j+1
                    Atemp[ (k-1)*(Nt+1)+(j-1), (k+1)*(Nt+1)+(j+1) ] += alpha / (ds*dt) # k-1,j-1 ~ k+1,j+1
                    #----------
                    Atemp[ (k-1)*(Nt+1)+(j+1), (k+1)*(Nt+1)+(j-1) ] += alpha / (ds*dt) # k-1,j+1 ~ k+1,j-1
                    Atemp[ (k-1)*(Nt+1)+(j+1), (k-1)*(Nt+1)+(j-1) ] += -alpha / (ds*dt) # k-1,j+1 ~ k-1,j-1
                    #-----
                    Atemp[ (k+1)*(Nt+1)+(j-1), (k-1)*(Nt+1)+(j+1) ] += alpha / (ds*dt) # k+1,j-1 ~ k-1,j+1
                    Atemp[ (k-1)*(Nt+1)+(j-1), (k-1)*(Nt+1)+(j+1) ] += -alpha / (ds*dt) # k-1,j-1 ~ k-1,j+1
                    #----------
                    Atemp[ (k+1)*(Nt+1)+(j-1), (k-1)*(Nt+1)+(j-1) ] += -alpha / (ds*dt) # k+1,j-1 ~ k-1,j-1
                    #-----
                    Atemp[ (k-1)*(Nt+1)+(j-1), (k+1)*(Nt+1)+(j-1) ] += -alpha / (ds*dt) # k-1,j-1 ~ k+1,j-1
                    #----------
                    coef = ( funcUab((j+1)*dt,coefa) - funcUab((j+1)*dt,coefb) \
                           - funcUab((j-1)*dt,coefa) + funcUab((j-1)*dt,coefb) ) / (dt * (sa - sb))
                    btemp[ (k+1)*(Nt+1)+(j+1) ] += alpha * 2 * coef / (ds*dt)
                    btemp[ (k-1)*(Nt+1)+(j+1) ] += - alpha * 2 * coef / (ds*dt)
                    btemp[ (k-1)*(Nt+1)+(j-1) ] += - alpha * 2 * coef / (ds*dt)
                    btemp[ (k+1)*(Nt+1)+(j-1) ] += alpha * 2 * coef / (ds*dt)
                    #----------
                    #----- H2 norm: time second derivative (label corrected: this block differences in j)
                    #----------
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k+0)*(Nt+1)+(j+1) ] += alpha / dt**4 # k,j+1 ~ k,j+1
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+0) ] += 4 * alpha / dt**4 # k,j ~ k,j
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k+0)*(Nt+1)+(j-1) ] += alpha / dt**4 # k,j-1 ~ k,j-1
                    #-----
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k+0)*(Nt+1)+(j+0) ] += -2 * alpha / dt**4 # k,j+1 ~ k,j
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+1) ] += -2 * alpha / dt**4 # k,j ~ k,j+1
                    #-----
                    Atemp[ (k+0)*(Nt+1)+(j+1), (k+0)*(Nt+1)+(j-1) ] += alpha / dt**4 # k,j+1 ~ k,j-1
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k+0)*(Nt+1)+(j+1) ] += alpha / dt**4 # k,j-1 ~ k,j+1
                    #-----
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j-1) ] += -2 * alpha / dt**4 # k,j ~ k,j-1
                    Atemp[ (k+0)*(Nt+1)+(j-1), (k+0)*(Nt+1)+(j+0) ] += -2 * alpha / dt**4 # k,j-1 ~ k,j
                    #----------
                    #----- H2 norm: stock second derivative (label corrected: this block differences in k)
                    #----------
                    Atemp[ (k+1)*(Nt+1)+(j+0), (k+1)*(Nt+1)+(j+0) ] += alpha / ds**4 # k+1,j ~ k+1,j
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+0) ] += 4 * alpha / ds**4 # k,j ~ k,j
                    Atemp[ (k-1)*(Nt+1)+(j+0), (k-1)*(Nt+1)+(j+0) ] += alpha / ds**4 # k-1,j ~ k-1,j (index was (k+1), contradicting its own comment)
                    #-----
                    Atemp[ (k+1)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+0) ] += -2 * alpha / ds**4 # k+1,j ~ k,j
                    Atemp[ (k+0)*(Nt+1)+(j+0), (k+1)*(Nt+1)+(j+0) ] +=
-2 * alpha / ds**4 # k,j ~ k+1,j #----- Atemp[ (k+1)*(Nt+1)+(j+0), (k-1)*(Nt+1)+(j+0) ] += alpha / ds**4 # k,j ~ k,j Atemp[ (k-1)*(Nt+1)+(j+0), (k+1)*(Nt+1)+(j+0) ] += alpha / ds**4 # k,j ~ k,j #----- Atemp[ (k+0)*(Nt+1)+(j+0), (k-1)*(Nt+1)+(j+0) ] += -2 * alpha / ds**4 # k,j ~ k-1,j Atemp[ (k-1)*(Nt+1)+(j+0), (k+0)*(Nt+1)+(j+0) ] += -2 * alpha / ds**4 # k-1,j ~ k,j #---------- coef = ( funcF( k*ds+sb, sa, sb, funcUab((j+1)*dt,coefa), funcUab((j+1)*dt,coefb) ) \ - 2 * funcF( k*ds+sb, sa, sb, funcUab((j+0)*dt,coefa), funcUab((j+0)*dt,coefb) ) \ + funcF( k*ds+sb, sa, sb, funcUab((j-1)*dt,coefa), funcUab((j-1)*dt,coefb) ) ) / dt**2 btemp[ (k+0)*(Nt+1)+(j+1) ] += alpha * 2 * coef / dt**2 btemp[ (k+0)*(Nt+1)+(j+0) ] += - alpha * 4 * coef / dt**2 btemp[ (k+0)*(Nt+1)+(j-1) ] += alpha * 2 * coef / dt**2 #---------- #---------- #----- Boundary de-computation #---------- if k+1 == Ns: Atemp[ (k+1)*(Nt+1)+(j+0), (k+1)*(Nt+1)+(j+0) ] = 0 # k+1,j ~ k+1,j Atemp[ (k+1)*(Nt+1)+(j+1), (k+1)*(Nt+1)+(j+1) ] = 0 # k+1,j+1 ~ k+1,j+1 Atemp[ (k+1)*(Nt+1)+(j-1), (k+1)*(Nt+1)+(j-1) ] = 0 # k+1,j-1 ~ k+1,j-1 btemp[ (k+1)*(Nt+1)+(j+0) ] = 0 # k+1,j btemp[ (k+1)*(Nt+1)+(j+1) ] = 0 # k+1,j+1 btemp[ (k+1)*(Nt+1)+(j-1) ] = 0 # k+1,j-1 if k-1 == 0: Atemp[ (k-1)*(Nt+1)+(j+0), (k-1)*(Nt+1)+(j+0) ] = 0 # k-1,j ~ k-1,j Atemp[ (k-1)*(Nt+1)+(j+1), (k-1)*(Nt+1)+(j+1) ] = 0 # k-1,j+1 ~ k-1,j+1 Atemp[ (k-1)*(Nt+1)+(j-1), (k-1)*(Nt+1)+(j-1) ] = 0 # k-1,j-1 ~ k-1,j-1 btemp[ (k-1)*(Nt+1)+(j+0) ] = 0 # k-1,j btemp[ (k-1)*(Nt+1)+(j+1) ] = 0 # k-1,j+1 btemp[ (k-1)*(Nt+1)+(j-1) ] = 0 # k-1,j-1 if j-1 == 0: Atemp[ (k+0)*(Nt+1)+(j-1), (k+0)*(Nt+1)+(j-1) ] = 0 # k,j-1 ~ k,j-1 Atemp[ (k+1)*(Nt+1)+(j-1), (k+1)*(Nt+1)+(j-1) ] = 0 # k+1,j-1 ~ k+1,j-1 Atemp[ (k-1)*(Nt+1)+(j-1), (k-1)*(Nt+1)+(j-1) ] = 0 # k-1,j-1 ~ k-1,j-1 btemp[ (k+0)*(Nt+1)+(j-1) ] = 0 # k,j-1 btemp[ (k+1)*(Nt+1)+(j-1) ] = 0 # k+1,j-1 btemp[ (k-1)*(Nt+1)+(j-1) ] = 0 # k-1,j-1 #---------- pass print("-----") print("Atemp = ") print(Atemp) print("-----") 
print("btemp = ") print(btemp) print("-----") print("-----") A = A + Atemp b = b + btemp print("-----") print("A = ") print(A) print("-----") print("b = ") print(b) print("-----") print("-----") input("Press Enter to continue...") # Conjugate gradient algorithm: https://en.wikipedia.org/wiki/Conjugate_gradient_method x = np.zeros(N).reshape(N,1) r = b - np.matmul(A,x) p = r rsold = np.dot(r.transpose(),r) for i in range(len(b)): Ap = np.matmul(A,p) alpha = rsold / np.matmul(p.transpose(),Ap) x = x + alpha * p r = r - alpha * Ap rsnew = np.dot(r.transpose(),r) if np.sqrt(rsnew) < 1e-16: break p = r + (rsnew / rsold) * p rsold = rsnew print("it = ", i) print("rsold = ", rsold) # Trading strategy sm = (sa + sb)/2 if x[Ns/2*(Nt+1)+Nt/2] >= optionAsk[0] + securityMargin: tradeAtTimeTau = True sellingPriceAtTimeTau = x[Ns/2*(Nt+1)+Nt/2] portfolio -= 140 * optionAsk # buy 140 options if x[Ns/2*(Nt+1)+Nt] >= optionAsk[0] + securityMargin: tradeAtTimeTwoTau = True sellingPriceAtTimeTwoTau = x[Ns/2*(Nt+1)+Nt] portfolio -= 140 * optionAsk # buy 140 options pause.until(datet) # Wait 10mn before the next loop pause.until(datet) datet = datetime.datetime.now() # Time should be around 20mn before closure datet = datetime.datetime(datet.year, datet.month, datet.day, datet.hour, datet.minute + 10) if tradeAtTimeTau == True: # sell stockOptVol = funcRetrieveStockOptionVolatility() optionAsk.pop(0) optionAsk.append(stockOptVol[3]) portfolio += min(optionAsk[2],sellingPriceAtTimeTau) * 140 # Wait 10mn more to sell the last options pause.until(datet) # it should be around 10mn before closure if tradeAtTimeTwoTau == True: # sell stockOptVol = funcRetrieveStockOptionVolatility() optionAsk.pop(0) optionAsk.append(stockOptVol[3]) portfolio += min(optionAsk[2],sellingPriceAtTimeTwoTau) * 140 # Market closure 
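For reference, the conjugate-gradient loop embedded above follows the textbook algorithm. Here is a minimal self-contained version of just that solver; the 2x2 test system is made up for illustration and is not part of the trading strategy:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rsold = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rsold / (p @ Ap)  # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rsnew = r @ r
        if np.sqrt(rsnew) < tol:  # converged
            break
        p = r + (rsnew / rsold) * p
        rsold = rsnew
    return x

# Illustrative SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG converges in at most n steps for an n-by-n SPD system, which is why the loop bound defaults to `n`.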
Don't put money on this as I'm still debugging (I'll bet you half a bitcoin I've mixed up a few indices in the H_2 norm)... Here is the discretisation formula I used, ready to copy-paste into latexbase:
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage[latin1]{inputenc}

\title{Klibanov algorithm}
\author{Discretisation formula}
\date{\today}

\begin{document}
\maketitle

Let
$$
a_{k,j} = \frac12\sigma(j\delta_\tau)^2\times(255\times13\times3)\times(k\delta_s+s_a)^2,
$$
then
\begin{alignat*}{3}
J_\alpha(u) = & \sum_{k=1}^{N_s} \sum_{j=1}^{N_t} \left( \frac{u_{k,j+1} - u_{k,j-1}}{\delta_\tau} + a_{k,j} \frac{u_{k+1,j} - 2u_{k,j} + u_{k-1,j}}{\delta_s^2}\right)^2\frac{2\tau - \delta_\tau}{N_t}\frac{s_a - s_b - \delta_s}{N_s}\\
& + \alpha \sum_{k=1}^{N_s} \sum_{j=1}^{N_t} \left( u_{k,j} - F_{k,j}\right)^2 \\
& \qquad + \left( \frac{u_{k,j+1} - u_{k,j-1}}{\delta_t} - \frac{F_{k,j+1} - F_{k,j-1}}{\delta_t}\right)^2 \\
& \qquad + \left( \frac{u_{k+1,j} - u_{k-1,j}}{\delta_s} - \frac{u_{a,j} - u_{b,j}}{s_a - s_b}\right)^2 \\
& \qquad + \left( \frac{(u_{k+1,j+1} - u_{k-1,j+1}) - (u_{k+1,j-1} - u_{k-1,j-1})}{\delta_s\delta_t} \right. \\
& \qquad \qquad \left. - \frac{(u_{a,j+1} - u_{b,j+1}) - (u_{a,j-1} - u_{b,j-1})}{(s_a-s_b)\delta_t}\right)^2 \\
& \qquad + \left( \frac{u_{k,j+1} - 2u_{k,j} + u_{k,j-1}}{\delta_\tau^2} - \frac{F_{k,j+1} - 2F_{k,j} + F_{k,j-1}}{\delta_\tau^2} \right)^2 \\
& \qquad + \left( \frac{u_{k+1,j} - 2u_{k,j} + u_{k-1,j}}{\delta_s^2}\right)^2
\end{alignat*}
with $\tau = 1$ unit of time (for example 10mn).
\end{document}
Let me know if you see anything wrong... And if you want to contribute, feel free to do so.
submitted by thomasbbbb to algotrading [link] [comments]


2020.11.20 22:16 g_f_b_throwaway Paths in a graph that visit the most nodes

Hi everybody,
I tried posting this yesterday but my account wasn't old enough so here's attempt number 2. I'm working on an interesting graph theory problem and was wondering if you all had any insight. Here's the statement of the problem:
Given a weighted, directed graph (V,E) and a threshold T, find the largest subset of V that can be visited by a path which starts and ends on specified nodes (call them p and q) and has cumulative weight less than T.
To analyze it with tools I am already familiar with, I've tried to change it to a counting problem:
Given a weighted, directed graph (V,E), a threshold T, and a subset S of the nodes V, count how many paths start at p, end at q, have cumulative weight less than T, and visit all members of S.
I think I have a way to wrap this information up into a generating function. The approach is as follows:
  1. First, introduce a dummy variable `t`, and also one dummy variable `[; r_i ;]` for every node i.
  2. Let `[; w_{i,j} ;]` be the weight of the edge from node `i` to node `j`. Construct a transition matrix `M` where entry `[; M_{i, j} = t^{w_{i,j}}r_i ;]` if `[; i\neq j ;]` and `[; M_{i,j}=0 ;]` if `[; i=j ;]`.
  3. Sum the powers of `M` to count all paths: `[; \sum_{n=0}^{\infty}M^n=(I-M)^{-1} ;]`, and take the entry of the matrix that corresponds to those paths that start at node `p` and end at node `q` (i.e., `[; (I-M)^{-1}_{p, q} ;]`, computed by row-reduction if `[; I-M ;]` is singular).
  4. This entry is a generating function parametrized by `t`. Expressed as a series, the coefficient of `[; t^n ;]` describes paths with a cumulative weight of `n`. This coefficient is a polynomial in the dummy variables `[; r_i ;]` with integer coefficients (e.g., `[; 2r_1 + r_1r_2 + 6r_2r_3^2 ;]`). The monomials in this sum enumerate "crossing profiles" of paths (e.g., the term `[; r_2r_3^2 ;]` refers to paths that pass through node `3` twice and node `2` once), and the coefficient counts how many of those paths there are (since the coefficient is `6`, there are 6 such paths).
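As a sanity check on the generating-function bookkeeping, the quantity in the original problem can also be found by brute force. Below is a hedged sketch (the example graph, node names, and the assumption of positive integer weights are mine, not the poster's); it searches all walks from `p` to `q` of weight below `T` directly:

```python
def best_coverage(adj, p, q, T):
    """Return the largest number of distinct nodes visited by any walk
    from p to q with cumulative weight strictly less than T.

    adj maps node -> list of (neighbor, weight). Weights are assumed to
    be positive integers, so the weight of a walk strictly increases
    with each edge and the search tree below T is finite.
    """
    best = 0

    def dfs(node, weight, seen):
        nonlocal best
        if node == q:
            best = max(best, len(seen))
        for nxt, w in adj.get(node, []):
            if weight + w < T:
                dfs(nxt, weight + w, seen | {nxt})

    dfs(p, 0, {p})
    return best

# Toy graph: p -> a -> b -> q, with a shortcut a -> q and a direct p -> q.
example = {
    "p": [("a", 1), ("q", 5)],
    "a": [("q", 1), ("b", 1)],
    "b": [("q", 1)],
}
```

With `T = 4`, the walk p→a→b→q (weight 3) is feasible and visits 4 nodes; with `T = 3`, only p→a→q (weight 2) fits, visiting 3.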
The questions I'm left with are 1) first of all, is this argument sound? 2) Is there some way I could simplify this to make it more amenable to programming without using a computer algebra system? For example, using primes instead of dummy variables (I know this approach doesn't actually work out because the matrix multiplication "destroys" the prime factorization of the entries). Alternatively, does this method provide insights on the original problem that I could use to simplify things? (e.g., "such a path with cost less than T exists iff M is nonsingular and ") Nothing immediately strikes me as helpful, but I'm also not well-versed in graph theory, so I'd like to hear from people more experienced than I. 3) Is there a better way to do this?
Thanks for staying with me through this post. I'd love to hear your thoughts!
submitted by g_f_b_throwaway to math [link] [comments]


2020.07.04 09:41 elephantra [Q]: Interactions in Multiple Logistic Regression (Mixed Effect Model)

Hi! First-time poster so hope this is an appropriate question.
I'm analyzing a mixed effect model using multiple logistic regression and found that my interaction is significant (as predicted). I have two fixed effects (age and condition) and one random effect (participant).
I haven't seen a lot of information about testing interactions, but is it possible to test the interaction using polynomial contrasts/dummy codes? I know how to test interactions using contrasts in linear regression, but I'm not sure if it's the same in logistic regression.
Thanks for your time!
submitted by elephantra to AskStatistics [link] [comments]


2019.11.22 13:17 OttoMoneyWars2028 A glossary of crypto terms Timothy May sent to the Cypherpunks mailing list, 27 years ago today

From: tcmay@netcom.com (Timothy C. May) Subject: Crypto Glossary Date: Sun, 22 Nov 92 11:50:55 PST
Here's the glossary of crypto terms we passed out in printed form at the first Cypherpunks meeting in September 1992. Some compromises had to be made in going from the printed form to the ASCII of this transmission, so I hope you'll bear with me.
I'm sending it to the entire list because nearly everyone who hears about it says "Is it online?" and wants a copy. If you don't want it, discard it.
I'm not going to be maintaining the "Cypherpunks FAQ," so don't send me corrections or additions.
Enjoy
Tim May

Major Branches of Cryptology (as we see it)

(these sections will introduce the terms in context, though complete definitions will not be given)
Encryption
privacy of messages; using ciphers and codes to protect the secrecy of messages. DES is the most common symmetric cipher (same key for encryption and decryption); RSA is the most common asymmetric cipher (different keys for encryption and decryption).
Signatures and Authentication
proving who you are; proving you signed a document (and not someone else).
Untraceable Mail
untraceable sending and receiving of mail and messages; focus: defeating eavesdroppers and traffic analysis; the DC protocol (dining cryptographers).
Cryptographic Voting
focus: ballot box anonymity; credentials for voting; issues of double voting, security, robustness, efficiency.
Digital Cash
focus: privacy in transactions, purchases; unlinkable credentials; blinded notes; "digital coins" may not be possible.
Crypto Anarchy
using the above to evade government, to bypass tax collection, etc.; a technological solution to the problem of too much government.

Glossary

agoric systems
open, free market systems in which voluntary transactions are central.
Alice and Bob
cryptographic protocols are often made clearer by considering parties A and B, or Alice and Bob, performing some protocol. Eve the eavesdropper, Paul the prover, and Vic the verifier are other common stand-in names.
ANDOS
all or nothing disclosure of secrets.
anonymous credential
a credential which asserts some right or privilege or fact without revealing the identity of the holder. This is unlike CA driver's licenses.
asymmetric cipher
same as public key cryptosystem.
authentication
the process of verifying an identity or credential, to ensure you are who you said you were.
biometric security
a type of authentication using fingerprints, retinal scans, palm prints, or other physical/biological signatures of an individual.
bit commitment
e.g., tossing a coin and then committing to the value without being able to change the outcome. The blob is a cryptographic primitive for this.
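In modern practice a commitment like this is often realised with a hash function. The sketch below is a heuristic stand-in of my own (hiding and binding only under assumptions on SHA-256), not the abstract blob primitive the glossary describes:

```python
import hashlib
import secrets

def commit(bit):
    """Commit to a bit: publish the digest, keep (bit, nonce) private."""
    nonce = secrets.token_bytes(16)   # random nonce hides the bit value
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, nonce

def reveal_ok(digest, bit, nonce):
    """Verify an opened commitment by recomputing the digest."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

digest, nonce = commit(1)
```

The committer cannot later claim the other bit value without finding a hash collision, and the verifier learns nothing from the digest alone.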
blinding, blinded signatures
A signature that the signer does not remember having made. A blind signature is always a cooperative protocol and the receiver of the signature provides the signer with the blinding information.
blob
the crypto equivalent of a locked box. A cryptographic primitive for bit commitment, with the properties that a blob can represent a 0 or a 1, that others cannot tell by looking whether it's a 0 or a 1, that the creator of the blob can "open" the blob to reveal the contents, and that no blob can be both a 1 and a 0. An example of this is a flipped coin covered by a hand.
channel
the path over which messages are transmitted. Channels may be secure or insecure, and may have eavesdroppers (or enemies, or disrupters, etc.) who alter messages, insert and delete messages, etc. Cryptography is the means by which communications over insecure channels are protected.
chosen plaintext attack
an attack where the cryptanalyst gets to choose the plaintext to be enciphered, e.g., when possession of an enciphering machine or algorithm is in the possession of the cryptanalyst.
cipher
a secret form of writing, using substitution or transposition of characters or symbols.
ciphertext
the plaintext after it has been encrypted.
code
a restricted cryptosystem where words or letters of a message are replaced by other words chosen from a codebook. Not part of modern cryptology, but still useful.
coin flipping
an important crypto primitive, or protocol, in which the equivalent of flipping a fair coin is possible. Implemented with blobs.
collusion
wherein several participants cooperate to deduce the identity of a sender or receiver, or to break a cipher. Most cryptosystems are sensitive to some forms of collusion. Much of the work on implementing DC Nets, for example, involves ensuring that colluders cannot isolate message senders and thereby trace origins and destinations of mail.
computationally secure
where a cipher cannot be broken with available computer resources, but in theory can be broken with enough computer resources. Contrast with unconditionally secure.
countermeasure
something you do to thwart an attacker.
credential
facts or assertions about some entity. For example, credit ratings, passports, reputations, tax status, insurance records, etc. Under the current system, these credentials are increasingly being cross-linked. Blind signatures may be used to create anonymous credentials.
credential clearinghouse
banks, credit agencies, insurance companies, police departments, etc., that correlate records and decide the status of records.
cryptanalysis
methods for attacking and breaking ciphers and related cryptographic systems. Ciphers may be broken, traffic may be analyzed, and passwords may be cracked. Computers are of course essential.
crypto anarchy
the economic and political system after the deployment of encryption, untraceable e-mail, digital pseudonyms, cryptographic voting, and digital cash. A pun on "crypto," meaning "hidden," and as when Gore Vidal called William F. Buckley a "crypto fascist."
cryptography
another name for cryptology.
cryptology
the science and study of writing, sending, receiving, and deciphering secret messages. Includes authentication, digital signatures, the hiding of messages (steganography), cryptanalysis, and several other fields.
cyberspace
the electronic domain, the Nets, and computer-generated spaces. Some say it is the "consensual reality" described in "Neuromancer." Others say it is the phone system. Others have work to do.
DC protocol, or DC-Net
the dining cryptographers protocol. DC-Nets use multiple participants communicating with the DC protocol.
DES
the Data Encryption Standard, proposed in 1977 by the National Bureau of Standards (now NIST), with assistance from the National Security Agency. Based on the "Lucifer" cipher developed by Horst Feistel at IBM, DES is a secret key cryptosystem that cycles 64-bit blocks of data through multiple permutations with a 56-bit key controlling the routing. "Diffusion" and "confusion" are combined to form a cipher that has not yet been cryptanalyzed (see "DES, Security of"). DES is in use for interbank transfers, as a cipher inside of several RSA-based systems, and is available for PCs.
DES, Security of
many have speculated that the NSA placed a trapdoor (or back door) in DES to allow it to read DES-encrypted messages. This has not been proved. It is known that the original Lucifer algorithm used a 128-bit key and that this key length was shortened to 64 bits (56 bits plus 8 parity bits), thus making exhaustive search much easier (so far as is known, brute-force search has not been done, though it should be feasible today). Shamir and Biham have used a technique called "differential cryptanalysis" to reduce the exhaustive search needed for chosen plaintext attacks (but with no import for ordinary DES).
differential cryptanalysis the Shamir-Biham
technique for cryptanalyzing DES. With a chosen plaintext attack, they've reduced the number of DES keys that must be tried from about 2^56 to about 2^47 or less. Note, however, that rarely can an attacker mount a chosen plaintext attack on DES systems.
digital cash, digital money
Protocols for transferring value, monetary or otherwise, electronically. Digital cash usually refers to systems that are anonymous. Digital money systems can be used to implement any quantity that is conserved, such as points, mass, dollars, etc. There are many variations of digital money systems, ranging from VISA numbers to blinded signed digital coins. A topic too large for a single glossary entry.
digital pseudonym
basically, a "crypto identity." A way for individuals to set up accounts with various organizations without revealing more information than they wish. Users may have several digital pseudonyms, some used only once, some used over the course of many years. Ideally, the pseudonyms can be linked only at the will of the holder. In the simplest form, a public key can serve as a digital pseudonym and need not be linked to a physical identity.
digital signature
Analogous to a written signature on a document. A modification to a message that only the signer can make but that everyone can recognize. Can be used legally to contract at a distance.
digital timestamping
one function of a digital notary public, in which some message (a song, screenplay, lab notebook, contract, etc.) is stamped with a time that cannot (easily) be forged.
dining cryptographers protocol (aka DC protocol, DC nets)
the untraceable message sending system invented by David Chaum. Named after the "dining philosophers" problem in computer science, participants form circuits and pass messages in such a way that the origin cannot be deduced, barring collusion. At the simplest level, two participants share a key between them. One of them sends some actual message by bitwise exclusive-ORing the message with the key, while the other one just sends the key itself. The actual message from this pair of participants is obtained by XORing the two outputs. However, since nobody but the pair knows the original key, the actual message cannot be traced to either one of the participants.
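The two-participant XOR trick described in this entry can be written out in a few lines; the message bytes below are illustrative:

```python
import secrets

def dc_round(message: bytes) -> bytes:
    """One round of the two-party dining-cryptographers protocol.

    The pair shares a random key in advance. The sender broadcasts
    message XOR key; the other participant broadcasts the key itself.
    XORing the two broadcasts yields the message, and an outside
    observer cannot tell which of the two was the sender.
    """
    key = secrets.token_bytes(len(message))              # shared secret
    sender_out = bytes(m ^ k for m, k in zip(message, key))
    other_out = key
    return bytes(a ^ b for a, b in zip(sender_out, other_out))
```

Since each broadcast on its own is uniformly random, neither output reveals which participant held the real message.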
discrete logarithm problem
given integers a, n, and x, find some integer m such that a^m mod n = x, if m exists. Modular exponentiation, the a^m mod n part, is straightforward (and special purpose chips are available), but the inverse problem is believed to be very hard, in general. Thus it is conjectured that modular exponentiation is a one-way function.
DSS, Digital Signature Standard
the latest NIST (National Institute of Standards and Technology, successor to NBS) standard for digital signatures. Based on the El Gamal cipher, some consider it weak and a poor substitute for RSA-based signature schemes.
eavesdropping, or passive wiretapping
intercepting messages without detection. Radio waves may be intercepted, phone lines may be tapped, and computers may have RF emissions detected. Even fiber optic lines can be tapped.
factoring
Some large numbers are difficult to factor. It is conjectured that there are no feasible (i.e., "easy," less than exponential in the size of the number) factoring methods. It is also an open problem whether RSA may be broken more easily than by factoring the modulus (e.g., the public key might reveal information which simplifies the problem). Interestingly, though factoring is believed to be "hard," it is not known to be in the class of NP-hard problems. Professor Janek invented a factoring device, but he is believed to be fictional.
information-theoretic security "unbreakable"
security, in which no amount of cryptanalysis can break a cipher or system. One time pads are an example (providing the pads are not lost nor stolen nor used more than once, of course). Same as unconditionally secure.
key
a piece of information needed to encipher or decipher a message. Keys may be stolen, bought, lost, etc., just as with physical keys.
key exchange, or key distribution
the process of sharing a key with some other party, in the case of symmetric ciphers, or of distributing a public key in an asymmetric cipher. A major issue is that the keys be exchanged reliably and without compromise. Diffie and Hellman devised one such scheme, based on the discrete logarithm problem.
known-plaintext attack
a cryptanalysis of a cipher where plaintext-ciphertext pairs are known. This attack searches for an unknown key. Contrast with the chosen plaintext attack, where the cryptanalyst can also choose the plaintext to be enciphered.
mail, untraceable
a system for sending and receiving mail without traceability or observability. Receiving mail anonymously can be done with broadcast of the mail in encrypted form. Only the intended recipient (whose identity, or true name, may be unknown to the sender) may be able to decipher the message. Sending mail anonymously apparently requires mixes or use of the dining cryptographers (DC) protocol.
minimum disclosure proofs
another name for zero knowledge proofs, favored by Chaum.
mixes
David Chaum's term for a box which performs the function of mixing, or decorrelating, incoming and outgoing electronic mail messages. The box also strips off the outer envelope (i.e., decrypts with its private key) and remails the message to the address on the inner envelope. Tamper-resistant modules may be used to prevent cheating and forced disclosure of the mapping between incoming and outgoing mail. A sequence of many remailings effectively makes tracing sending and receiving impossible. Contrast this with the software version, the DC protocol.
modular exponentiation
raising an integer to the power of another integer, modulo some integer. For integers a, n, and m, a^m mod n. For example, 5^3 mod 100 = 25. Modular exponentiation can be done fairly quickly with a sequence of bit shifts and adds, and special purpose chips have been designed. See also discrete logarithm.
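The "sequence of bit shifts and adds" is the square-and-multiply method; a short version, mirroring Python's built-in three-argument `pow(a, m, n)`:

```python
def mod_exp(a, m, n):
    """Compute a**m mod n by right-to-left square-and-multiply."""
    result = 1
    base = a % n
    while m > 0:
        if m & 1:                     # low bit of the exponent is set
            result = (result * base) % n
        base = (base * base) % n      # square for the next bit
        m >>= 1                       # shift the exponent
    return result
```

For the example above, `mod_exp(5, 3, 100)` gives 25. Reducing modulo n at every step keeps the intermediate numbers small, which is what makes this fast even for huge exponents.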
National Security Agency (NSA)
the largest intelligence agency, responsible for making and breaking ciphers, for intercepting communications, and for ensuring the security of U.S. computers. Headquartered in Fort Meade, Maryland, with many listening posts around the world. The NSA funds cryptographic research and advises other agencies about cryptographic matters. The NSA once obviously had the world's leading cryptologists, but this may no longer be the case.
negative credential
a credential that you possess that you don't want any one else to know, for example, a bankruptcy filing. A formal version of a negative reputation.
NP-complete
a large class of difficult problems. "NP" stands for nondeterministic polynomial time, a class of problems thought in general not to have feasible algorithms for their solution. A problem is "complete" if any other NP problem may be reduced to that problem. Many important combinatorial and algebraic problems are NP-complete: the traveling salesman problem, the Hamiltonian cycle problem, the word problem, and on and on.
oblivious transfer
a cryptographic primitive that involves the probabilistic transmission of bits. The sender does not know if the bits were received.
one-time pad
a string of randomly-selected bits or symbols which is combined with a plaintext message to produce the ciphertext. This combination may be a shift of each letter by some amount, a bitwise exclusive-OR, etc. The recipient, who also has a copy of the one time pad, can easily recover the plaintext. Provided the pad is used only once and then destroyed, and is not available to an eavesdropper, the system is perfectly secure, i.e., it is information-theoretically secure. Key distribution (the pad) is obviously a practical concern, but consider CD-ROM's.
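Using the bitwise exclusive-OR variant, encryption and decryption are the same operation; the pad and message below are illustrative:

```python
import secrets

def xor_pad(data: bytes, pad: bytes) -> bytes:
    """XOR data against a one-time pad; applying the same pad twice
    returns the original bytes. Secure only if the pad is truly random,
    at least as long as the message, kept secret, and never reused."""
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, pad))

pad = secrets.token_bytes(32)
ciphertext = xor_pad(b"attack at dawn", pad)
plaintext = xor_pad(ciphertext, pad)
```

Because each ciphertext byte is the XOR of a message byte with a fresh random byte, the ciphertext alone carries no information about the plaintext.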
one-way function
a function which is easy to compute in one direction but hard to find any inverse for, e.g. modular exponentiation, where the inverse problem is known as the discrete logarithm problem. Compare the special case of trap door one-way functions. An example of a one-way operation is multiplication: it is easy to multiply two prime numbers of 100 digits to produce a 200-digit number, but hard to factor that 200-digit number.
P ?=? NP
Certainly the most important unsolved problem in complexity theory. If P = NP, then cryptography as we know it today does not exist. If P = NP, all NP problems are "easy."
padding
sending extra messages to confuse eavesdroppers and to defeat traffic analysis. Also adding random bits to a message to be enciphered.
plaintext
also called cleartext, the text that is to be enciphered.
Pretty Good Privacy (PGP)
Philip Zimmermann's implementation of RSA, recently upgraded to version 2.0, with more robust components and several new features. RSA Data Security has threatened PZ so he no longer works on it. Version 2.0 was written by a consortium of non-U.S. hackers.
prime numbers
integers with no factors other than themselves and 1. The number of primes is unbounded. By the prime number theorem, roughly 1 in 230 of the 100-decimal-digit numbers is prime. Since there are about 10^70 particles in the universe, there are about 10^27 100-digit primes for each and every particle in the universe!
probabilistic encryption
a scheme by Goldwasser, Micali, and Blum that allows multiple ciphertexts for the same plaintext, i.e., any given plaintext may have many ciphertexts if the ciphering is repeated. This protects against certain types of known ciphertext attacks on RSA.
proofs of identity
proving who you are, either your true name, or your digital identity. Generally, possession of the right key is sufficient proof (guard your key!). Some work has been done on "is-a-person" credentialling agencies, using the so-called Fiat-Shamir protocol...think of this as a way to issue unforgeable digital passports. Physical proof of identity may be done with biometric security methods. Zero knowledge proofs of identity reveal nothing beyond the fact that the identity is as claimed. This has obvious uses for computer access, passwords, etc.
protocol
a formal procedure for solving some problem. Modern cryptology is mostly about the study of protocols for many problems, such as coin-flipping, bit commitment (blobs), zero knowledge proofs, dining cryptographers, and so on.
public key
the key distributed publicly to potential message-senders. It may be published in a phonebook-like directory or otherwise sent. A major concern is the validity of this public key to guard against spoofing or impersonation.
public key cryptosystem
the modern breakthrough in cryptology, designed by Diffie and Hellman, with contributions from several others. Uses trap door one-way functions so that encryption may be done by anyone with access to the "public key" but decryption may be done only by the holder of the "private key." Encompasses public key encryption, digital signatures, digital cash, and many other protocols and applications.
public key encryption
the use of modern cryptologic methods to provide message security and authentication. The RSA algorithm is the most widely used form of public key encryption, although other systems exist. A public key may be freely published, e.g., in phonebook-like directories, while the corresponding private key is closely guarded.
public key patents
M.I.T. and Stanford, due to the work of Rivest, Shamir, Adleman, Diffie, Hellman, and Merkle, formed Public Key Partners to license the various public key, digital signature, and RSA patents. These patents, granted in the early 1980s, expire between 1998 and 2002. PKP has licensed RSA Data Security Inc., of Redwood City, CA, which handles the sales, etc.
quantum cryptography
a system based on quantum-mechanical principles. Eavesdroppers alter the quantum state of the system and so are detected. Developed by Brassard and Bennett, only small laboratory demonstrations have been made.
reputations
the trail of positive and negative associations and judgments that some entity accrues. Credit ratings, academic credentials, and trustworthiness are all examples. A digital pseudonym will accrue these reputation credentials based on actions, opinions of others, etc. In crypto anarchy, reputations and agoric systems will be of paramount importance. There are many fascinating issues of how reputation-based systems work, how credentials can be bought and sold, and so forth.
RSA
the main public key encryption algorithm, developed by Ron Rivest, Adi Shamir, and Leonard Adleman. It exploits the difficulty of factoring large numbers to create a private key and public key. First invented in 1978, it remains the core of modern public key systems. It is usually much slower than DES, but special-purpose modular exponentiation chips will likely speed it up. A popular scheme for speed is to use RSA to transmit session keys and then a high-speed cipher like DES for the actual message text.
Description
Let p and q be large primes, typically with more than 100 digits. Let n = pq and find some e such that e is relatively prime to (p - 1)(q - 1). The set of numbers p, q, and d is the private key for RSA. The set of numbers n and e forms the public key (recall that knowing n is not sufficient to easily find p and q...the factoring problem). A message M is encrypted by computing M^e mod n. The owner of the private key can decrypt the encrypted message by exploiting number theory results, as follows. An integer d is computed such that ed = 1 (mod (p - 1)(q - 1)). Euler proved a theorem that M^(ed) = M mod n and so M^(ed) mod n = M. This means that in some sense the integers e and d are "inverses" of each other. [If this is unclear, please see one of the many texts and articles on public key encryption.]
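The description above can be traced through with deliberately tiny primes (completely insecure, for illustration only; real keys use primes of 100+ digits, as the entry says):

```python
# Textbook RSA with toy parameters -- do not use for anything real.
p, q = 61, 53
n = p * q                      # public modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, relatively prime to phi
d = pow(e, -1, phi)            # private exponent with e*d = 1 (mod phi)

M = 65                         # a message, encoded as an integer < n
C = pow(M, e, n)               # encrypt: M^e mod n
recovered = pow(C, d, n)       # decrypt: C^d mod n
```

`pow(e, -1, phi)` (Python 3.8+) computes the modular inverse, and `recovered` equals `M`, as Euler's theorem guarantees.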
secret key cryptosystem
A system which uses the same key to encrypt and decrypt traffic at each end of a communication link. Also called a symmetric or one-key system. Contrast with public key cryptosystem.
smart cards
a computer chip embedded in a credit card. They can hold cash, credentials, cryptographic keys, etc. Usually these are built with some degree of tamper-resistance. Smart cards may perform part of a crypto transaction, or all of it. Performing part of it may mean checking the computations of a more powerful computer, e.g., one in an ATM.
spoofing, or masquerading
posing as another user. Used for stealing passwords, modifying files, and stealing cash. Digital signatures and other authentication methods are useful to prevent this. Public keys must be validated and protected to ensure that others don't substitute their own public keys which users may then unwittingly use.
steganography
a part of cryptology dealing with hiding messages and obscuring who is sending and receiving messages. Message traffic is often padded to reduce the signals that would otherwise come from a sudden beginning of messages.
symmetric cipher
same as private key cryptosystem.
tamper-responding modules, tamper-resistant modules (TRMs)
sealed boxes or modules which are hard to open, requiring extensive probing and usually leaving ample evidence that the tampering has occurred. Various protective techniques are used, such as special metal or oxide layers on chips, armored coatings, embedded optical fibers, and other measures to thwart analysis. Popularly called "tamper-proof boxes." Uses include: smart cards, nuclear weapon initiators, cryptographic key holders, ATMs, etc.
tampering, or active wiretapping
interfering with messages and possibly modifying them. This may compromise data security, help to break ciphers, etc. See also spoofing.
token
some representation, such as ID cards, subway tokens, money, etc., that indicates possession of some property or value.
traffic analysis
determining who is sending or receiving messages by analyzing packets, frequency of packets, etc. A part of steganography. Usually handled with traffic padding.
transmission rules
the protocols for determining who can send messages in a DC protocol, and when. These rules are needed to prevent collision and deliberate jamming of the channels.
trap messages
dummy messages in DC Nets which are used to catch jammers and disrupters. The messages contain no private information and are published in a blob beforehand so that the trap message can later be opened to reveal the disrupter. (There are many strategies to explore here.)
trap-door
In cryptography, a piece of secret information that allows the holder of a private key to invert a normally hard-to-invert function.
trap-door one way functions
functions which are easy to compute in both the forward and reverse direction but for which the disclosure of an algorithm to compute the function in the forward direction does not provide information on how to compute the function in the reverse direction. More simply put, trap-door one way functions are one way for all but the holder of the secret information. The RSA algorithm is the best-known example of such a function.
unconditional security
same as information-theoretic security, that is, unbreakable except by loss or theft of the key.
unconditionally secure
where no amount of intercepted ciphertext is enough to allow the cipher to be broken, as with the use of a one-time pad cipher. Contrast with computationally secure.
voting, cryptographic
Various schemes have been devised for anonymous, untraceable voting. Voting schemes should have several properties: privacy of the vote, security of the vote (no multiple votes), robustness against disruption by jammers or disrupters, verifiability (voter has confidence in the results), and efficiency.
zero knowledge proofs
proofs in which no knowledge of the actual proof is conveyed. Peggy the Prover demonstrates to Sid the Skeptic that she is indeed in possession of some piece of knowledge without actually revealing any of that knowledge. This is useful for access to computers, because eavesdroppers or dishonest sysops cannot steal the knowledge given. Also called minimum disclosure proofs. Useful for proving possession of some property, or credential, such as age or voting status, without revealing personal information.
submitted by OttoMoneyWars2028 to Bitcoin [link] [comments]


2019.09.16 04:25 Hope1995x What is a sparse language?

A google search
A sparse language contains only polynomially many strings of any given length. This is too broad for me to fully understand.
I do know that the tally language 1^k is sparse. But I don't see what they mean by "polynomially many strings." For every non-fixed value of k, it seems to increase exponentially.

May I get an explanation for dummies?
submitted by Hope1995x to learnmath [link] [comments]


2019.03.30 00:48 Amxela Linked lists and Data abstraction in C++

I have an assignment where I need to incorporate a linked list where I previously used a dynamic array. One of the problems I'm running into is that my linkedlist->next is actually pointing to the previous and I'm not entirely sure on how to fix it. Here is the constructor for the class I'm dealing with.
    poly::poly() {
        // Pre:  None.
        // Post: A basic "zero" polynomial object is created
        //       with zero terms.
        terms = new Node;
        terms->data.coeff = 0;
        terms->data.exp = 0;
        terms->link = NULL;
    };
Is there anything shown there that would make it so my linked list is backwards?
If needed I can post more of the code.
EDIT:
Copy function
    void poly::copy(const poly& p) {
        // Pre:  p is a valid polynomial.
        // Post: p is copied into the implicit parameter.
        int nterms = list_length(p.terms->link);
        Node* currentptr = p.terms;
        Node* temp = new Node;
        for (int i = 0; i < nterms; i++) {
            temp->data.coeff = currentptr->data.coeff;
            temp->data.exp = currentptr->data.exp;
            temp->data.var = currentptr->data.var;
            currentptr = currentptr->link;
            temp->link = currentptr;
        }
        this->terms = temp;
    };
insert term function:
    void poly::InsertTerm(term t) {
        // Pre:  Implicit parameter is a valid polynomial NOT
        //       containing any term with the same exponent as t.
        // Post: The new term t is inserted into the implicit
        //       parameter polynomial, making sure the terms are in
        //       decreasing order of exponents. In the process, the
        //       polynomial is "expanded" if necessary.
        int i = list_length(terms);
        unsigned int e = t.exp;
        if (i == 0) {
            list_head_insert(terms, t);
        } else {
            list_insert(terms, t);
        }
    };
read function:
    void poly::read() {
        // Pre:  None.
        // Post: A new value is read into the implicit parameter polynomial,
        //       per instructions as given out first. The terms are stored in
        //       decreasing order of exponents. If necessary, the old value is destroyed.
        poly temp;
        Node* TempNode = new Node;
        char variable;
        int coefficient;
        int exponent;
        cout << "Input a polynomial by first specifying the variable and then the terms in any order." << endl
             << "Each term is specified by an integer coefficient and" << endl
             << "a non-negative integer exponent." << endl
             << "Indicate END by specifying a dummy term with" << endl
             << "a zero coefficient and/or a negative exponent." << endl;
        cin >> variable;
        do {
            cin >> coefficient;
            if (coefficient) {
                cin >> exponent;
                if (exponent >= 0) {
                    temp.InsertTerm(term(variable, coefficient, exponent));
                }
            } else {
                while (cin && (cin.peek() != '\n'))
                    cin.ignore();
            }
        } while (coefficient && (exponent >= 0));
        *this = temp;  // The assignment operator is being called here!
    };
It will prompt me to insert a polynomial, so I will enter x 3 2 2 1 1 0 (which should be 3x^2 + 2x + 1) and it will print 2x + 3x^2. My group partner created a plus function for a different part of this project and it reversed the polynomial after adding (so if we add 2x+3x^2 and 2x+3x^2 we would get 6x^2+4x), but beforehand the polynomials are backwards.

EDIT 2:
Just occurred to me that the write function may be needed.
write function:
    void poly::write() const {
        // Pre:  The implicit parameter is a valid polynomial.
        // Post: The polynomial represented by the implicit parameter is
        //       printed out on the standard output. The variable is used as stored.
        int y = list_length(terms);
        Node* currentptr = this->terms;
        for (int x = 0; x < y; x++) {
            if (x == 0) {
                cout << currentptr->data.coeff << currentptr->data.var << currentptr->data.exp << "";
            } else if (currentptr->data.exp == 0 && currentptr->data.coeff > 0) {
                cout << currentptr->data.sign() << currentptr->data.coeff << "";
            } else if (currentptr->data.exp > 0 && currentptr->data.coeff < 0) {
                cout << currentptr->data.coeff << currentptr->data.var << currentptr->data.exp << "";
            } else if (currentptr->data.exp == 1 && currentptr->data.coeff > 0) {
                cout << currentptr->data.sign() << currentptr->data.coeff << currentptr->data.var << "";
            } else if (currentptr->data.exp > 0 && currentptr->data.coeff > 0) {
                cout << currentptr->data.sign() << currentptr->data.coeff << currentptr->data.var << currentptr->data.exp << "";
            } else if (currentptr->data.exp == 1 && currentptr->data.coeff > 0) {
                cout << currentptr->data.coeff << currentptr->data.var << "";
            } else {
                cout << currentptr->data.coeff << currentptr->data.var << "";
            }
            currentptr = currentptr->link;
        }
        cout << endl;
    };

submitted by Amxela to AskProgramming [link] [comments]

