acer

32400 Reputation

29 Badges

19 years, 345 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@Prashanth Reverse order in the list [x,y] of variables (as the 2nd argument passed to `solve`).

sol := solve([x-y = 10, x+y < 100], [y,x]):

convert(convert(op([1,1],sol),RealRange),relation);

                  And(-infinity <= y, y < 45)

which typesets like -infinity <= y < 45 in the GUI.

Similarly,

solx := solve([x-y = 10, x+y < 100], [x,y]):
convert(convert(op([1,1],solx),RealRange),relation);

                  And(-infinity <= x, x < 55)

This kind of approach can get difficult, as the example becomes more complicated. This kind of thing has been asked before. It'd be nice if `solve` itself could do the heavy lifting.

It occurred to me that obtaining the desired form (from what I got with a mapped int call) might possibly be accomplished with these replacement rules:

> rule1;

         2 (exp(2 a) - exp(2 b))        coth(b) coth(a) - 1
    - ----------------------------- = - -------------------
      (exp(2 b) - 1) (exp(2 a) - 1)        coth(-b + a)    

> rule2;

               coth(b) coth(a) - 1               
               ------------------- = coth(-b + a)
               -coth(a) + coth(b)                

I don't know how to obtain that rule1 except by some rather ad hoc business of adding/subtracting or multiplying/dividing key terms to numerator/denominator, with judicious partial simplification/expansion/etc.

The second one is easy to create programmatically,

> convert(expand( coth(a-b) ),coth) = coth(a-b);

               coth(b) coth(a) - 1               
               ------------------- = coth(-b + a)
               -coth(a) + coth(b)                

I didn't actually try to use these, via applyrule, on the summands of the mapped int result for the given example.
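For instance, here is a minimal sketch of building rule2 and applying it with applyrule. Note that this uses the literal names a and b, so it only matches expressions in those exact names; for general summands one would want pattern variables (e.g. a::name) in the rule instead.

```
# Build the rewrite rule from the coth addition formula.
rule2 := convert(expand(coth(a - b)), coth) = coth(a - b):

# Apply it to an expression of the matching (literal) form.
applyrule(rule2, (coth(a)*coth(b) - 1)/(coth(b) - coth(a)));
```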

@SRINIVASA RAGHAVA Your document appears to be essentially empty.

The only content in your attachment is an execution group whose input is the empty string. I.e.,

Typesetting:-mrow(Typesetting:-mi(""), executable = "false", mathvariant = "normal")

 

@SRINIVASA RAGHAVA You haven't succeeded yet. When you use the Big Green Arrow you have to 1) select the file, 2) hit the "Upload" button, and then 3) scroll down in the popup and hit either the "Insert Link" or "Insert Contents" button.

Where is the attachment?

acer

@fzfbrd

Change your,

CrssOfVds := VDS -> ArrayInterpolation(Vds, Crss, V)

to,

CrssOfVds := VDS -> ArrayInterpolation(Vds, Crss, VDS)

And change,

Q := evalf(Int(CrssOfVds, Vdson .. Charge_Vds))

to,

Q := evalf(Int(CrssOfVds, Vdson .. Charge_Vds, epsilon = 0.1e-6))

That gets me a result of 2.96204101072752*10^(-9).

I found that at your default working precision of Digits=10 the epsilon tolerance for the evalf/Int call had to be at least about 1e-8. You can experiment with the working precision and that epsilon tolerance. (I don't know how accurate you want the result to be.)

Alternatively you could (and probably should) change that call to ArrayInterpolation to include the method=spline option. (See my Answer below.) That allows a result to be attained even without using the epsilon option of evalf/Int, at the default Digits=10.

I.e., on my 64-bit Linux version of Maple 2015.0 I am seeing,

CrssOfVds := VDS -> ArrayInterpolation(Vds, Crss, VDS, method = spline):

Q := evalf(Int(CrssOfVds, Vdson .. Charge_Vds));

                 2.96479729742643*10^(-9)

Note that there is a distinction between obtaining a highly accurate numeric estimation of a crude (linear, say) interpolation, versus obtaining a numeric estimation of a higher degree (and likely more correct) interpolation. There's not much point in obtaining many digits of a crude linear interpolation. You'd be better off with a modest number of accurate digits of a higher degree (better fitting) interpolation. Hope that makes sense.

But of course if you are working with experimental data that is likely subject to noise then it may well be that only a few digits can even be meaningful.

This might also be a good time to mention the distinction between interpolating the data and smoothing (or fitting) it. The interpolation schemes discussed here will pass directly through the data points. But experimental data merely approximates some physical process, in which case an interpolation which passes directly through the data points may not actually give the best approximation of the area under the abstract curve represented by the actual physical process. It may turn out that a smoothing of the data (of which a numeric "fit" is one example) gives a curve whose area better represents that of the actual physical process. In practice the difference may be indiscernible, up to the degree of noise present. Sorry if this is all obvious. You may well already be expecting only a few decimal digits to be trusted in any numerical estimate.
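As a rough sketch of that interpolate-versus-fit distinction (using made-up noisy data and a hypothetical quadratic model for the fit):

```
# Made-up data: roughly quadratic, with a little noise.
X := Vector([0., 1., 2., 3., 4.], datatype = float):
Y := Vector([0.1, 1.1, 3.9, 9.2, 15.8], datatype = float):

# Spline interpolation passes through every data point.
spl := v -> CurveFitting:-ArrayInterpolation(X, Y, v, method = spline):

# A least-squares fit smooths over the noise instead.
fitf := unapply(Statistics:-Fit(a*x^2 + b*x + c, X, Y, x), x):

# The two areas will generally differ by an amount on the order of the noise.
evalf(Int(spl, 0 .. 4)), evalf(Int(fitf, 0 .. 4));
```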

Here is my edited worksheet. (...you'll have to change back the definition of ThisDocumentPath, to run it.) 

Fet_Cap_modif.mw

I have seen a similar problem when I have a running worksheet that is minimized to the desktop tray (MS-Windows), and if I have the GUI set up to always ask whether to use a new or shared kernel for each new opened document.

In this situation, when I double-click the Maple launch icon the kernel-query-popup can be present and waiting but suppressed from view.

In this situation I can sometimes simply hit the Enter key and so clear the waiting query-popup.

acer

How did you obtain the Arrays of values, btw?

I ask because if you obtained them from calling dsolve(...,numeric) then we can often do better than ArrayInterpolation (or often any other quadrature approach, after the fact)  by using dsolve itself. That flavour of this question comes up quite a bit.

The choice of most efficient method will depend on how many elements there are in your Arrays, and how many times you need to compute a numeric integral. For example, a spline interpolation approach (from CurveFitting, say) can get somewhat slow as the Array length gets very large, partly because of the cost of forming the piecewise (once) and partly on account of the cost of evaluating the piecewise (each time).

acer

@Carl Love Please pardon me while I make a clarifying comment, just in the interest of being clear to the OP:

The first element of the solution returned by DirectSearch:-DataFit is the minimal achieved value of the objective function used for the particular fitting method. And different fitting methods may use a different objective formula as their respective measure.

So it is not generally sensible to compare results obtained from the various fitting methods simply by comparing the magnitude of the first element in each returned solution. The minimal values from different objectives, evaluated at their different optimal parameter points, are not directly comparable. So one cannot just pick the solution which gives the smallest first element.

It is thus up to the user to decide what measure of optimality (i.e. what objective) produces a "better" fit.

How does your supervisor feel about an interactive solution using Embedded Components?

acer

@Carl Love Yes, that kind of deferment (userinfo, etc) is part of the unfortunate way that Document Blocks are implemented (the pair of execution  groups, one with output suppressed and the other with input hidden...).

But the OP mentioned printf, and if I recall correctly that particular kind of I/O can usually be obtained asynchronously even in a Document Block. So that's one reason why I asked for more details.

[edit] I must correct myself. In a Document Block it seems that even printf display is deferred. dbdefer.mw

However, the OP has now clarified: this is about a Worksheet, so the above is not relevant.

More details might help here. What OS? Is this in a Worksheet and an Execution Group or a Document and a paragraph (Document Block)?

How much output are we talking about? Very many short-ish lines?

Is there any natural amount of output which you actually would like to see, in a block? Or do you want every line? Or only one last line is useful?

Do you have a sample of code that demonstrates the problem, that we can work with?

Have you tried sprintf and redirecting that to a TextArea embedded component? (Just an idea... may not be suitable here, depending on the details above.)
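A minimal sketch of that sprintf idea, assuming the document contains an embedded TextArea component and that its name is "TextArea0" (that name is hypothetical; use whatever name your component actually has):

```
# Build the text with sprintf instead of printing it.
str := sprintf("iteration %d: value = %a\n", 7, evalf(Pi)):

# Push the string into the component, bypassing the deferred output region.
DocumentTools:-SetProperty("TextArea0", value, str);
```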

acer

So your problem is with the final call to `solve`, when epsilon is greater than 0?

If so, then you could try using `fsolve` (repeatedly, with the avoid option built up) or `DirectSearch` (a 3rd party add-on from the Application Center) rather than `solve`.
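For instance, a sketch of the repeated fsolve approach on a hypothetical cubic, accumulating the avoid set as each root is found:

```
# A made-up cubic with three real roots.
f := x^3 - 6*x^2 + 11*x - 6.05:

r1 := fsolve(f, x):
r2 := fsolve(f, x, avoid = {x = r1}):
r3 := fsolve(f, x, avoid = {x = r1, x = r2}):
```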

Let us know if you need all the roots, or just one real root, or all the real roots, etc.

acer

assumptions, internal remember tables... if the OP wants a clean slate then it seems sensible to restart. The premise that a restart should be avoided because package initialization is onerous seems faulty to me.

If you are loading packages using the mouse and the menubar Tools->Load Package... then I suggest that instead you make the package loading be done explicitly by code in the Document/Worksheet.

You can even paste all the package loading commands (calls to the `with` command) on the same line, so that you can execute them all in one keystroke.
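For example (with hypothetical package choices), a single input line such as:

```
with(plots): with(LinearAlgebra): with(CurveFitting):
```

can then be re-executed in one keystroke after any restart.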

You can even make the Standard GUI insert the actual command for you, the first time you load the packages in the worksheet. See the menubar item Tools->Options->Display and the checkbox "Expose commands inserted from Load/Unload Package menus". If you check that then subsequent use of Tools->Load Package from the menubar will embed the `with` command call in the document.

acer
