acer

MaplePrimes Activity


These are replies submitted by acer

@mweisbr I see now that the bad points are not isolated, which I didn't realize before. I'm not sure how interpolation (including just linear) is going to be justified.

You may be able to smooth the data by convolving the errant points (i.e. something like ImageTools:-Convolution, at least in concept).

The tricky bit can be to convolve (mostly) only the unsatisfactory points, because you don't want their bad values affecting the good points nearby. It helps to have a robust and flexible peak-detection scheme, so that only the bad data gets convolved. Sometimes that's not so hard, and it's easier if the bad points are quite isolated.
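
Here's a rough sketch of that idea in plain Maple, just to indicate the concept (the data, the neighbourhood choice, and the set of "bad" indices are all made up; in practice the detection step is the real work): each flagged point gets replaced by an average of its nearby unflagged neighbours, and the good points are left untouched.

restart;
data := Vector([1.0, 1.1, 7.9, 8.3, 1.2, 1.3, 1.25]):   # made-up data; entries 3 and 4 are the errant ones
bad := {3, 4}:                                           # indices judged unsatisfactory (however you detect them)
smoothed := copy(data):
for i in bad do
   nbrs := remove(member, [i-2, i-1, i+1, i+2], bad);    # drop neighbours that are themselves bad
   nbrs := select(j -> j >= 1 and j <= numelems(data), nbrs);
   smoothed[i] := add(data[j], j in nbrs)/nops(nbrs);
end do:
smoothed;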

Can you confirm whether the uploaded data is correct?

Once you have the source code in a plain-text file, why not use Maple's read command instead of repeatedly copying and pasting it into your Maple sessions?
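
For example, something like this (the file name is just a placeholder for your own plaintext source file); read parses and executes the file's contents as if they had been entered in the session:

restart;
read "C:/Users/you/mycode.mpl";   # plain 1-D Maple input in the file

Forward slashes in the path work on all platforms.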

@Annonymouse 

line is being used in Kitonum's example as an export of the plottools package. So if you don't want to load that package (and, by doing so, rebind the name line) then you'd need to reference it using the long form plottools:-line.

A:=[1, 2, 3]: B:=[4, 5, 6]:

plots:-display(plottools:-line(A, B,
                               color=red, thickness=2));

I'll also correct Kitonum's first example below, i.e. either A+t*~(B-A) or (1-t)*~A+t*~B. And if you're just going to produce a straight line then you only need two points in a space curve.

A:=[1, 2, 3]: B:=[4, 5, 6]:

plots:-spacecurve(A+t*~(B-A), t=0..1, numpoints=2,
                  color=red, thickness=2);

Also, this seems simple.

A:=[1, 2, 3]: B:=[4, 5, 6]:

plots:-pointplot3d([A,B], style=line,
                   color=red, thickness=2);

@Annonymouse Personally, I think you should just add the additional details, rephrasings, etc. in a Comment/Reply on this Question.

If you'd rather edit the Question, then it's better to leave the original material in place and clearly mark the new material as new. That's also a reasonable choice.

Adding your own details to an Answer here would not be clear and most useful, I think.

@Annonymouse Please don't post such a close duplicate as a separate Question. Put the follow-up details here instead.

@Christian Wolinski 

Sometimes kernel built-in functions like eval will call out to the interpreted Library.

Here's an example where it makes a difference, using Maple 2018.

restart;

a:=2:
trace(`eval/piecewise`):

p:=piecewise(x<a, 1/(x-a), x=a, 22, 33):

eval(p, x=a);

{--> enter `eval/piecewise`, args = piecewise(x < 2, 1/(-2+x), x = 2, 22, 33), {x = 2}

expr, eqs := piecewise(x < 2, 1/(-2+x), x = 2, 22, 33), {x = 2}

oper := piecewise

eqs := {x = 2}

eqs := {x = 2}

tmp := table([])

t := x = 2

tmp := {}

i := 1

piecewise(x < 2, 1/(-2+x), x = 2, 22, 33)

false

false

2 < 2

false

3

2 = 2

true

<-- exit `eval/piecewise` (now at top level) = 22}

22

restart;

a:=2:
unassign('`eval/piecewise`'): # not really a good thing to do

p:=piecewise(x<a, 1/(x-a), x=a, 22, 33):

eval(p, x=a);

Error, numeric exception: division by zero

 


Download CW_pw.mw

The given examples all return 22, without emitting an error, in Maple 2018.0.

There are options which make the solver try harder.

restart;
f:=sin(3*x)-cos(7*x)+sin(17*x)+cos(20*x)-sin(67*x):

Optimization:-Maximize(f, x=0..2*Pi, method = branchandbound);
           [3.17327920314180, [x = 4.10406174848180]]
Optimization:-Minimize(f, x=0..2*Pi, method = branchandbound);
          [-4.46537362759898, [x = 1.71553210252725]]

Optimization:-Maximize(f, x=0..2*Pi, method = branchandbound, evaluationlimit=100);
           [3.40373784598076, [x = 4.94786409656254]]
Optimization:-Minimize(f, x=0..2*Pi, method = branchandbound, evaluationlimit=100);
          [-4.82606452700204, [x = 3.59004152499192]]

Of course there will always be examples which make a global optimizer miss the best extreme points. And indeed there are examples for which DirectSearch fails to find the global optima with its own defaults. However, I suspect that DirectSearch is stronger in general (and with default options too).

Unfortunately the extra options for method branchandbound are listed only on the help page for NLPSolve and not on that for Maximize/Minimize.

Setting infolevel[Optimization]:=3 or higher shows information on working parameters (default or otherwise).
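
For example, re-running one of the calls above with that setting prints the working parameter values along with the result:

restart;
infolevel[Optimization] := 3:
f := sin(3*x)-cos(7*x)+sin(17*x)+cos(20*x)-sin(67*x):
Optimization:-Maximize(f, x=0..2*Pi, method=branchandbound, evaluationlimit=100);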

It seems to work OK for me in Maple 2015 (or 2017) if I use plot(sin(x)/x, legend=sinc(x)) [note the minor correction of the invalid syntax].

By that I mean I see the legend in the examples which use the list assigned to A.

Could you please upload a Worksheet or Document that exhibits the problem?

@tomleslie First thing: ArrayTools:-Append acts in-place on its rtable argument. In the given example it is applied to the value assigned to the parameter Rx1. The parameter Rx1 (of the procedure Sort4) is not itself being assigned a value, or otherwise used illegally, when ArrayTools:-Append acts on its assigned (rtable) value.

Second thing: round-bracket indexing into rtables (Vector, Matrix, and Array), aka "programmer indexing", was introduced in Maple 12 (released 2008). See ?updates,Maple12,programming 
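
As a small illustration of programmer indexing (a sketch, if I recall the semantics correctly): assigning with round brackets past the current bounds enlarges the rtable, whereas the usual square-bracket indexing would throw an out-of-range error.

V := Vector([1, 2, 3]):
V(5) := 10:   # round-bracket (programmer) indexing: V is grown to 5 entries
V;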

The old (lowercase) matrix and array are both implemented internally using a table.

Several of the old linalg commands have early code like, say,

proc(A)
   local B;
   B := evalm(A);   # obtain the matrix value assigned to the argument
   ...(operations on elements of B)...
end proc

or similar. Generally the linalg commands did not offer in-place semantics. The newer uppercase Matrix and Array with dense storage are based internally on the newer rtable (i.e. rectangular table, because elements are stored in a contiguous rectangular block of memory to allow efficient use by external compiled routines).

Unlike a table, an rtable does not have last-name-eval. But an rtable is indeed generally passed by reference (which I suspect contradicts a claim above). No copy is made when one is passed into a procedure, because eval works differently on an rtable (hence the later need for rtable_eval).
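
A quick way to see that last-name-eval difference at the top level:

restart;
t := table([1 = 10]):
T := Array([10]):
t;         # last-name-eval: only the name t is displayed
eval(t);   # the table's contents
T;         # no last-name-eval: the Array itself is displayed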

The example with elementwise sqrt~ works on a full copy because map does so (as a special case), and not because Matrices are in general passed by value.

The reason for this handling of tables is that they are mutable and the behaviour is more generally useful than not. E.g. in-place argument passing, which is beneficial for more than just memory performance.
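
And here is a small sketch of the pass-by-reference behaviour (the procedure name is made up): an element assignment made inside the procedure is visible to the caller, because no copy of the Matrix was made on the way in.

restart;
scale_first := proc(M::Matrix)
   M[1,1] := 2*M[1,1];   # mutates the caller's Matrix in-place
   return NULL;
end proc:
A := Matrix([[1, 2], [3, 4]]):
scale_first(A):
A[1,1];                  # now 2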

I recall looking at this before, where some branches of the piecewise are evaluated prematurely. IIRC it's related to stuff around showstat(`eval/piecewise`,15).

@Adam Ledger You identified three steps. Making the "pre execution" action of step 3 automatic is the least of the difficulties.

For steps 1 and 3 you could wrap all your commands in a call to a procedure you write, e.g. B( intended_command_here ). Making step 3 (or 1) automatic -- without the wrapping call -- is a finesse that could well wait until last.

For step 1 you need to measure the performance of executed commands and record the details in a way that can be analyzed in step 2. You could use commands from the CodeTools package for most of the measurement, I expect.
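
Here's a rough sketch of that wrapping idea, just to show the flavour (the names B and perflog are made up, and the particular metrics recorded are only a guess at what you would want to mine later): the wrapped command is timed by CodeTools:-Usage and the measurements are stashed for later analysis.

restart;
perflog := table():
counter := 0:
B := proc(cmd::uneval)
   global perflog, counter;
   local res;
   # evaluate the command under CodeTools:-Usage, keeping its result and costs
   res := [CodeTools:-Usage(cmd, output=['output','realtime','bytesused'], quiet)];
   counter := counter + 1;
   perflog[counter] := [cmd, res[2], res[3]];   # record the call and its measured costs
   res[1];                                      # pass back the command's own result
end proc:
B( int(sin(x)^10, x=0..Pi) );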

The central difficulties will be in step 2. The computations in general will not exhibit patterns that could be predicted statistically. You will have to severely restrict the kinds of computations to relatively few narrow domains if there is to be any hope of successful data mining and prediction as you describe it. So far you have given no indication in this thread of any tight and narrow domain of computation. I suspect that you do not properly conceive of the enormity of what you seem to be asking more generally.

The breadth of the computational domain aside, who is going to devise the data mining of step 2 and the statistical prediction methodology of step 3? That would require programming acumen and, I expect, considerably more detail as to the computational domain.

What have you accomplished so far on this project?
