Gabriel Barcellos

MaplePrimes Activity


These are replies submitted by Gabriel Barcellos

@mmcdara Working with these expressions through programming constructs may not be the most appropriate approach, because I need precise control over the generated equations. Would it be possible to use the routine you created while working only with analytical equations, instead of typical programming commands such as "if", "while", etc.?

@C_R 

@dharr

Your comment is quite pertinent and makes sense regarding the value d=3. In this region it is harder to obtain numerical values: at this value you are sitting right on the boundary of the phase transitions, so for a first approach it is not advisable to start there.

From a mathematical point of view, working with extreme values of d (very large positive or negative d) tends to make the problem easier. From a graphical point of view, there are fewer regions to cross in the d x T diagram. If you want to work with a value of d at the limit of the transitions, use d close to 3, but never exactly 3; besides the greater computational difficulty, you will find that interpreting the results is even harder.

What I advise is: for a given value of d, such as d=2.995, find T and the possible magnetization values (sketched below). Another possibility, though one we use less often, is to fix T and find the value of the constant d; this requires a little more care, not because of the difficulty itself, but in understanding which line is which.
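
A minimal sketch of what I mean, assuming eq1 and eq2 stand for the actual self-consistency equations of the routine (expressions in m, T and d; the names and search ranges here are only illustrative):

    d_val := 2.995:                                         # fix d near, but not at, 3
    sys   := eval({eq1 = 0, eq2 = 0}, d = d_val):           # substitute the chosen d
    sol   := fsolve(sys, {m = 0 .. 7/2, T = 0.01 .. 5});    # adjust ranges to the physical region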

Thanks for the comment @C_R

But a small detail may go unnoticed, and I want to draw attention to it here. You mentioned that "I also noticed in my original spreadsheet that all fsolve calls below produce the same result" and then attached a screenshot.

I interpret this to mean that the output values of the equations are the same regardless of the variable d, at least in these cases (d = 2.982, 2.984, etc.). Is that what you meant?

If so, that is exactly where the details matter. Perhaps with these simplifications I lose precision. Even though two results may look the same (like 2.9876543 and 2.9876555), differing only in the last few digits, it is this fine control that I care about and that makes the difference.

Is your question along these lines? It may be that the computational gain does not compensate for that loss of precision.

@C_R 

As requested, I am attaching the routine containing the spin-3/2 case. Could you tell me whether there has been any progress in your attempt?

What I have been doing, and what works: literally sitting and waiting. Although that may sound ironic, which is not my intention, I leave Maple running all through the night to obtain a few new points. In terms of speed there has been little improvement.


@C_R 

Even though the program you provided, written in terms of equations containing summations, runs really fast, it is still necessary to compare, point by point, the values it produces with those of the original routine (a sketch of such a check is below). Furthermore, it is necessary to analyze whether both summation-based expressions explicitly reproduce the physics of the system. It is worth remembering that even though we are looking for a mathematical solution, this problem is of physical interest: the answers must be linked to reality, and the physics of the problem must be respected.
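
For the point-by-point comparison I have in mind something simple along these lines, where f_orig and f_sum stand for the original expression and the summation-based one (hypothetical names; both in the variables m, T and d):

    check := (mv, Tv, dv) -> evalf(eval(f_orig - f_sum, [m = mv, T = Tv, d = dv])):
    seq(check(1.0, 0.5, dv), dv in [2.95, 2.98, 2.995]);   # differences should sit at the working precision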

Regarding the spin-3/2 routine, I am attaching it to this comment. I hope that with this simplified version you can help solve the problem.

Campo_Médio_spin_3_2_-_Forum.mw

@C_R @dharr

Two questions have been raised in the last few hours.

"Is it possible that instead of specifying d and finding T, you can specify T and find d?"

Yes, it is perfectly possible to swap these variables; there is no problem at all in specifying T numerically and finding d, or vice versa. What is more interesting from a conceptual point of view is this: knowing the interval where the values of d exist for the first-order transitions (labelled in the original Maple routine as m(7/2)->m(5/2), for example) makes it easier to determine T from the values of d. In other words, we know exactly the range where the curve exists, so we only need to guess values of d and find T.

The opposite is not as simple, since it is not possible to say in advance from which value of T the curve will exist. That analysis is based on a preliminary implicit plot, roughly as sketched below.
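
Roughly what I do first, assuming eq stands for one of the transition equations (an expression in T and d; the ranges are only illustrative):

    with(plots):
    implicitplot(eq, d = 2.9 .. 3.0, T = 0 .. 2, gridrefine = 2);   # shows where the curve exists
    T_val := 0.8:                                                   # a T taken from the visible branch
    d_sol := fsolve(eval(eq, T = T_val), d = 2.9 .. 3.0);           # then d follows from fsolve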

@C_R the example for spin 3/2 compiles perfectly; there is little or no delay with the conventional method (which is what I was doing), so I think that even if we applied the new techniques to it we would see little improvement. Problems arise from spin 5/2 onwards. Z0 is the same function for all spins; what changes are the values that S0 can assume.

I could try to solve it in programs other than Maple, but from a theoretical point of view, knowing the expressions analytically (even though they are very large) is fundamental, as it allows me to control all the data in the problem.

Would it be justifiable to approximate the exponentials by polynomial terms via a series expansion, at least in some cases?
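
What I have in mind, as a toy example only (the real expressions are far larger, and the truncation would have to be checked against the physics of the problem):

    f      := exp(-d/T):
    f_poly := convert(taylor(f, d = 0, 6), polynom):   # truncated series in d
    evalf(eval(f - f_poly, [d = 0.3, T = 1.0]));       # gauge the truncation error at a sample point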

@dharr @C_R @um homem 

Your observations were very important, and I thank you for taking the time to make them. Let me address the questions raised so far about my last post.

I confess that regarding this comment, "@acer suggested something I have no experience with. If I interpret it correctly, he means to convert the equations for fsolve into procedures. By doing so, evalhf can be applied to the procedure and functions within the procedure, and code can be generated with the Compiler:-Compile command.

Attached is a way that uses the codegen library to generate procedures. It is about 2 times faster. Since it takes 5 minutes to optimize the procedures, it is advisable to define and optimize the procedures only once, including d as a fourth parameter. I did not do this due to lack of time.", I am as lost as you: I do not know how to implement these observations.
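
If I understood the direction of the suggestion, it would be something like the sketch below, where expr stands for one of the actual residual expressions in m, T and d (I have not tried this on the full equations):

    p  := unapply(expr, m, T, d):     # turn the expression into a procedure
    cp := Compiler:-Compile(p):       # compile once, for fast repeated numeric evaluation
    cp(1.0, 0.5, 2.995);              # later calls with numeric arguments are much cheaper
    # evalhf(p(1.0, 0.5, 2.995));     # hardware-float fallback if compilation is not possible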

Regarding the values of beta and J: for these specific cases they are equal to one, but in general they are something very particular and specific, and usually they are not equal to one.

I'll take a look at the Draguilev method, but I need to check whether this method is suitable for my application.

@C_R I also confess that I had some doubts about the two files you sent; they are a bit different from what I am used to doing.

Regarding the comment "The bottleneck seems to be the evaluation of the large number of exponentials. If I reformulate this as a (nearly) polynomial in three variables, many of the calculations run much faster. For example, one of the fsolve that took 84 minutes now takes 14 s. For the graphs, I created a procedure, but I only tested it in low resolution and didn't compare the time. Unfortunately, in fsolve calls where T is an output, this method doesn't work, and I think that's the case you're most interested in.": it is a very interesting approach, and the way you treated the equations is really clever. But is there really no possibility of treating T as an output? Could you tell me why this method does not work in that case?
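
Just so I am sure I follow the idea, here is how I picture it on a toy expression, assuming the substitution is x = exp(-1/T) (expr stands for something much larger of the same shape):

    expr  := exp(-d/T) + exp(-2*d/T) - m:
    xform := simplify(eval(expr, T = -1/ln(x))) assuming 0 < x, x < 1;
    # -> x^d + x^(2*d) - m : with d a fixed number this is (nearly) polynomial in x,
    #    which is where I get lost when T (and hence x) is the unknown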

@C_R 

Just for your information, I attach the graph of CPU usage at the beginning, while the calculations are running, and at the end, when the equation that I call Z0 is compiled; note, however, that background applications are open.

Another interesting point: we previously discussed our processors, and you even suggested comparing mine with the one available for your work. For that very specific case the difference was minimal, with very close timings. Now notice the difference for the Z0 function: it is almost 7 minutes. The 4 minutes appear because the routine was compiled on your PC and saved. This only shows that your computer processes faster; even so, the computation still takes considerably long.

I have a few more questions, if you don't mind answering them.

Would programming in another language make everything faster? As you can see in the program, I need multiple points and each one takes approximately 60 minutes.

I don't know the commands suggested by Acer, could you help me implement them?

From what I understand, the commands in the routine attached by you only show the processing time.

Thank you for the exchange of information

@C_R 

If you want the routine I sent in response to another question above, feel free to try running it on your computer, but it may take a while (or maybe you are already used to evaluating large systems)

@acer 

I am sending you the Maple program and if you can help me I would be extremely grateful.

The program is quite simplified, containing only one of the sequences of equations, but I believe it is sufficient to understand the problem.

Campo_Médio_spin_7_2_-_Forum.mw

@C_R 

Below are the same command lines you suggested; note the similarity of the parameters, there is not much difference. Remember that for this I use an Intel Core i5-7200U at 2.5 GHz with 8 GB of RAM; in addition, my computer was purchased in February 2019. If you want to test my routine, I can send you the program and you can compare how long it takes to run.

My question is: how much slower would the calculations run on my machine?

@Christopher2222 

Please look at the answer I gave to the second question; could you help me solve it? Or do you think that simply increasing the memory is enough?

Could you tell me how each component influences processing? For example: memory helps with this part, the processor with that, the video card with another, and so on.

@acer I will try to be as complete as possible in my answers

In this case, there are only three fsolve commands, but I could add more. The problem is that even with a single command, the program takes almost an hour to solve, and in this case, with three commands, it takes an absurd amount of time.

The systems are huge (really big), and they almost exceed the size of equation that Maple can handle. Regarding polynomials, I cannot say whether all the terms are polynomial, but yes, some terms contain polynomials; for the most part, however, they reduce to exponential terms or relations containing exponentials.

Well, about memory: the routine reports the time spent solving the equation and, next to it, the memory used (if that is what you are referring to). In fact it is not much, but compared with what I was doing before it has increased considerably. I attribute the increase in memory to the increase in the size of the system (the only explanation I could find).

About your comment "But you may very well be able to run such multiple calls of fsolve simultaneously and in parallel (even stopping all of them once any root is found) using the Grid or Threads packages. In this situation, getting the fastest CPU with multiple cores may be more profitable.": what would be the best way to compute or work with these packages, and how would they help me?
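
From the help pages, my guess at how this would look is the sketch below, where build_sys(dv) stands for a hypothetical procedure that assembles the system for a given d; I have not tried it yet:

    solve_one := proc(dv)
        fsolve(build_sys(dv), {m = 0 .. 7/2, T = 0.01 .. 5});
    end proc:
    # Grid:-Set('build_sys'):                # may be needed so the nodes see the procedure
    dlist   := [2.982, 2.984, 2.986, 2.988]:
    results := Grid:-Map(solve_one, dlist);  # one independent fsolve per value of d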

About this comment "Maple GUI resources can be suppressed by suppressing/avoiding large 2D Math output or input/plots, and so the computational time for your numeric rootfinding is likely unrelated to the RAM/GUI interaction.": I really do have trouble plotting 2D graphs of my equations; they take as long as solving my systems. Could that be because of the size and number of terms?
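
One thing I will try on my side, if I read that comment correctly, is to keep the huge expressions from being typeset at all by ending those lines with a colon (big_formula below is just a placeholder):

    Z0_big := expand(big_formula):        # ":" suppresses the 2-D output of a huge expression
    interface(typesetting = standard):    # lighter typesetting when output really is needed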

So what is the best way to get around the problem?

If you want, I can send you the Maple worksheet for you to take a look at. The routine is ready; all that is left is to reduce the computation time.

@acer 
Obviously not. What I am saying is: if one screen has 3000 pixels across 30 cm and another has 2000, for example, the size option will produce different physical sizes in cm, because on the screen with 3000 pixels of width the figure will occupy a smaller width in cm. When exported to PDF this makes a difference; test it on an old screen and on a 4K or 8K one and you will see. Therefore, the "size" option is not universal, it depends on the computer.

@acer You don't have any solution other than this, right?

@acer 

"Size is not a good option" (answer title) because the ppi command causes discrepancies between different computers. For example, my computer with a higher resolution would result in a smaller graph size compared to one with a lower resolution.

What I need is: given the command for a single graph, I need to specify a fixed width and height so that the result is INDEPENDENT OF THE COMPUTER THAT WILL BE USED to run the Maple routines.

Once you have the graph, you export it in PDF format (for example) to be edited in external programs. HOWEVER, ACROSS THE NETWORK OF COMPUTERS THAT SHARE THE SAME FILE, ALL EXPORTS MUST HAVE THE SAME SIZE.
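
One direction I considered is writing the figure straight to a PDF driver with a physical size instead of exporting the on-screen rendering; I am not sure the plotoptions string below is exactly right, so please correct me:

    plotsetup(pdf, plotoutput = "figure1.pdf", plotoptions = "width=12cm,height=8cm"):
    plot(sin(x), x = 0 .. 2*Pi);     # written to the file rather than the screen
    plotsetup(default):              # restore normal in-worksheet display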

Do you know how to help me?
