MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

Latest Post
  • Latest Posts Feed
CREATING A GRAPH EQUATION OF "DNA" IN CARTESIAN SPACE USING A PARAMETRIC SURFACE EQUATION IN MAPLE

    ENJOY...
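The worksheet carries the author's own equation, but the general idea can be sketched with a simple parametric double helix (the parameters below are my own choosing, purely illustrative):

```maple
with(plots):
# Two helical strands, offset by Pi, each drawn as a thin parametric tube
strand := phase -> plot3d(
    [cos(t + phase) + 0.1*cos(s), sin(t + phase) + 0.1*sin(s), t/2],
    t = 0 .. 6*Pi, s = 0 .. 2*Pi, style = surface):
display(strand(0), strand(Pi), scaling = constrained, axes = none);
```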

     

    Happy Springtime to all in the MaplePrimes Community! Though some in our community may not live in the northern hemisphere where flowers are beginning to bloom, many will be celebrating April holidays like Ramadan, Passover, and Easter.

    One of my favorite springtime activities is decorating eggs. Today, the practice is typically associated with the Christian holiday of Easter. However, painted eggs have roots in many cultures.

    For over 3,000 years, painting eggs has been a custom associated with the holiday of Nowruz, or Persian New Year, during the spring equinox. Furthermore, in the Bronze Age, decorated ostrich eggs were traded as luxury items across the Mediterranean and Northern Africa. Dipped eggs have also played an important role in the Jewish holiday of Passover since the 16th century.

    To celebrate this tradition, I would like to invite all of the Maplesoft community to create a decorated egg of their own with the Easter Egg Art Maple Learn document. In this document, an ovoid egg equation is used to define the shape of an egg. 



    The ovoid egg equation mimics the shape of a typical hen’s egg. Each bird species lays differently shaped eggs. For example, an ostrich’s egg is more oblong than an owl’s, and an owl’s egg is rounder than a goose’s. Surprisingly, every egg can be described by a single equation with four parameters:



    Learn more about this equation and others like it with John May’s Egg Formulas Maple Learn document.

    The Easter Egg Art document includes 9 different decorative elements; users can change the color, position, and size of each in order to create their own personal egg! The egg starts out looking like this:



    In just a couple of minutes, you can create a unique egg. Have fun exploring this document and share a screenshot of your egg in the comments below!  Here’s one I made:


How to Create a Graph Equation of a Water Drop Wave in Cartesian Space Using a Single Implicit Function in Maple

The equation is:  z = -cos( (x^2 + y^2)^(1/2) - a ),  where the parameter a varies from 0 to 2*Pi.

    Enjoy...

Please click the link below to see the full equation in Maple:

    Water_Drop_Wave.mw
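The quoted equation can be plotted and animated directly (the plotting ranges and frame count below are my own, for illustration):

```maple
with(plots):
# z = -cos(sqrt(x^2 + y^2) - a), with the parameter a running from 0 to 2*Pi
animate(plot3d,
        [-cos(sqrt(x^2 + y^2) - a), x = -4*Pi .. 4*Pi, y = -4*Pi .. 4*Pi,
         grid = [80, 80], style = surface],
        a = 0 .. 2*Pi, frames = 24);
```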

Maple Flow shows the exact same taskbar icon as Maple when both are open. It would be nice if they were slightly different.

     

    Several studies, such as “Seeing and feeling volumes: The influence of shape on volume perception”, have shown that people have a tendency to overestimate the volume of common objects, such as glasses and containers, that are tall and thin and underestimate those that are short and wide; this phenomenon is called “elongation bias”. 

     

    Sue Palmberg, an instructor at Edwin O. Smith High School, created and shared with us a lab activity for students to design a glass in Maple and use volumes of revolution to determine the amount of liquid it can hold. This lab was then turned into this Maple Learn document: Piecewise Volumes of Revolution Activity.

     

    Use this document to create your own glass or goblet shape and determine its volume. Simply create a piecewise function that will define the outside shape of your glass between your chosen bounds and another piecewise function to define the hollowed-out part of your creation. The document will graph the volumes of revolution that represent your glass and calculate the relevant volume integral for you.
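The workflow the document automates can be sketched in a few lines (a toy glass profile of my own, not the document's script): revolve two piecewise radius profiles about the x-axis and evaluate the volume integral V = Pi*int(r(x)^2, x) for each.

```maple
# Outer radius of a simple "glass" and the radius of its hollowed-out interior
outer_r := x -> piecewise(x < 1, 1, x <= 3, 1 + (x - 1)/2):
inner_r := x -> piecewise(x < 1.2, 0, x <= 3, outer_r(x) - 0.2):
V_outer := Pi*int(outer_r(x)^2, x = 0 .. 3):   # volume of the solid of revolution
V_inner := Pi*int(inner_r(x)^2, x = 0 .. 3):   # volume of the cavity
evalf(V_inner);   # cubic units of liquid the glass can hold
```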

     

    Here is my own goblet-shaped creation: 

    I used this piecewise function to define it:

    After creating the outline of my goblet, I constructed a function for the hollow part of the goblet – the part that can actually hold liquid.

     

     

Using Context Panel operations and the volume integral provided by the document, I know that the volume of the hollow part of my goblet is approximately 63.5, so my goblet would hold around 63.5 cubic units of liquid when full.

    Create your own goblets of varying shapes and see if their volumes surprise you; elongation bias can be tricky! For some extra help, check out the Piecewise Functions and Plots and Solids of Revolution - Volume Derivation documents!

Creating a Graph Equation of an Apple in Cartesian Space Using a Single Implicit Function in Maple

    Enjoy...

Please click the link below to see the full equation in the Maple file:

    2._Apel_3D_A.mw

     

Creating a Graph Equation of a Candle on the Cartesian Plane Using a Single Implicit Function in Maple

    Enjoy...

    3D_Candle.mw

    Sea_Shells.mw

     

     

     

Today I'm very grateful to have the inspiration to create a graph equation of a 3D candle in Cartesian space, using a single 3D implicit function in Maple.

    Enjoy... 

    Candle_1.mw

     

Today I was inspired to create a graph equation of a "Petrol Truck" using a single implicit equation in Cartesian space, in Maple.

    Maple software is amazing...

    Enjoy...

     

CREATING A 3D GRAPH EQUATION OF A BACTERIOPHAGE USING A SINGLE IMPLICIT EQUATION IN CARTESIAN SPACE IN MAPLE

    MAPLE SOFTWARE IS AMAZING...

    ENJOY...

     

    GRAPH EQUATION OF A FEATHER

    AND THE EQUATION IS:

    ENJOY...

I like this equation and am posting it because it is so beautiful...

    Click this link below to see full equation and download the Maple file: 

    Bulu_Angsa_3.mw

     

    GRAPH EQUATION OF "383" CREATED BY DHIMAS MAHARDIKA

    ENJOY...

    with(plots):

    DHIMAS MAHARDIKA EQUATION

    plots:-implicitplot(15-8.*cos(y)^(79/2)-32.*cos(y)^(37/2)+96.*cos(y)^(33/2)-96.*cos(y)^(29/2)+4.*cos(x)^(61/2)+4.*cos(x)^(31/2)-12.*cos(x)^(27/2)+12.*cos(x)^(23/2)+24.*cos(y)^29-48.*cos(y)^27+16.*cos(y)^8-64.*cos(y)^6+96.*cos(y)^4-4.*cos(x)^(19/2)-6.*cos(x)^19-4.*cos(x)^(57/2)+32.*cos(y)^(25/2)+24.*cos(y)^25+8.*cos(y)^(75/2)-cos(x)^38+cos(y)^50-64.*cos(y)^2+4.*cos(x)^2-6.*cos(x)^4+4.*cos(x)^6-cos(x)^8+12.*cos(x)^21-6.*cos(x)^23, x = -15 .. 15, y = -15 .. 15, numpoints = 50000, thickness = 4, colour = blue)

     

    NULL

    Download 383.mw

     

Drawing the Eiffel Tower using an Implicit Equation in Cartesian Space

     

    The recent Maple 2023 release comes with a multitude of new features, including a new Canvas Scripting Gallery full of templates for creating interactive Maple Learn documents.

    The Maple Learn Scripting Gallery can be accessed through Maple, by searching “BuildInteractiveContent Maple2023” in the search bar at the top of the application and clicking on the only result that appears. This will bring you to the help page titled “Build and Share Interactive Content”, which can also be found by searching “scripting gallery” in the search bar of a Maple help page window. The link to the Maple Learn Scripting Gallery is found under the “Canvas Scripting” section on this help page and clicking on it will open a Maple workbook full of examples and templates for you to explore.

    The interactive content in the Scripting Gallery is organized into five main categories – Graphing, Visualization, Quiz, Add-ons and Options, and Applications Optimized for Maple Learn – each with its own sub-categories, templates, and examples.

    One of the example scripts that I find particularly interesting is the “Normal Distribution” script, under the Visualizations category.

     

     

    All of the code for each of the examples and templates in the gallery is provided, so we can see exactly how the Normal Distribution script creates a Maple Learn canvas. It displays a list of grades, a plot for the grade distribution to later appear on, math groups for the data’s mean and variance, and finally a “Calculate” button that runs a function called UpdateStats.

    The initial grades loaded into the document result in the below plot, created using Maple’s DensityPlot and Histogram functions, from the Statistics package. 
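Outside the canvas, the same kind of figure can be produced directly with those two Statistics commands (the sample grades below are my own):

```maple
with(Statistics):
grades := [62, 71, 75, 78, 80, 82, 85, 88, 91, 95]:
X := RandomVariable(Normal(Mean(grades), StandardDeviation(grades))):
# Histogram of the raw data overlaid with the fitted normal density
plots:-display(Histogram(grades, binwidth = 5),
               DensityPlot(X, range = 40 .. 110, thickness = 3));
```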




     

    The UpdateStats function takes the data provided in the list of grades and uses a helper function, getDist, to generate the new plot to display the data, the distribution, the mean, and the variance. Then, the function uses a Script object to update the Maple Learn canvas with the new plot and information.

    The rest of the code is contained in the getDist function, which uses a variety of functions from Maple’s Statistics package. The Normal Distribution script takes advantage of Maple’s ability to easily calculate mean and variance for data sets, and to use that information to create different types of random variable distributions.

Using the “Interactive Visualization” template provided in the gallery, many more interactive documents can be created, like this Polyhedra Visualization and this Damped Harmonic Oscillator – both from the Scripting Gallery – or like my own Linear Regression: Method of Least Squares document.


     

    Another new feature of Maple 2023 is the Quiz Builder, also featured in the Scripting Gallery. Quizzes created using Quiz Builder can be displayed in Maple or launched as Maple Learn quizzes, and the process for creating such a quiz is short.

    The QuizBuilder template also provides access to many structured examples, available from a dropdown list:


    As an example, check out this Maple Learn quiz on Expected Value: Continuous Practice. Here is what the quiz looks like when generated in Maple:


     

    This quiz, in particular, is “Fill-in the blank” style, but Maple users can also choose “Multiple Choice”, “True/False”, “Multiple Select”, or “Multi-Line Feedback”. It also makes use of all of the featured code regions from the template, providing functionality for checking inputted answers, generating more questions, showing comprehensive solutions, and providing a hint at the press of a button.

    Check out the Maple Learn Scripting Gallery for yourself and see what kinds of interactive content you can make for Maple and Maple Learn!

     

    Maple 2023: The colorbar option for contour plots does not work when used with the Explore command. See the example below.

    No_colorbar_when_exploring_contour_plots.mw
     

    Just installed Maple 2023 on Macbook Air M1.

Maple Tasks don't work :-(



This post is closely related to Centered Divided Difference approximations, whose purpose is to build Finite-Difference (FD) approximation schemes.
In that post I also talked, without explicitly naming it, about the Truncation Error; see for instance https://en.wikiversity.org/wiki/Numerical_Analysis/Truncation_Errors.

I am focusing here on a less well-known concept named the "Equivalent Equation" (sometimes called the "Modified Equation").
The seminal paper (no free access, which is surprising since it was first published 50 years ago) is this one by Warming and Hyett: https://www.sciencedirect.com/science/article/pii/0021999174900114.
For a textbook example you can look here: https://guillod.org/teaching/m2-b004/TD2-solution.pdf.
An alternative to Warming and Hyett's method for deriving the Equivalent Equation is given here (in French):
http://www.numdam.org/item/M2AN_1997__31_4_459_0.pdf (Carpentier et al.)

I have never heard of the concept of the Modified Equation applied to elliptic PDEs; its main domain of application is advection PDEs (parabolic PDEs in simpler cases).

Basically, any numerical scheme for solving an ODE or a PDE has a Truncation Error: there is some discrepancy between the exact solution of the PDE and the solution the scheme provides.
This discrepancy depends on the truncation error, in space alone for ODEs or elliptic PDEs, or in space and time for hyperbolic or advection PDEs.

One main problem with the Truncation Error is that it does not tell us what impact it will have on the computed solution. For instance, will the scheme introduce diffusion, antidiffusion, scattering, ...?
The aim of the Modified Equation is to answer these questions by constructing the continuous equation that a given numerical scheme solves with zero truncation error.
This is the original point of view Warming and Hyett developed in their 1974 paper.

This is a subject I worked on 30 years ago (Maple V), and again later in 2010.
It is very easy with Maple to do the painstaking derivation that Warming and Hyett did by hand half a century ago. It is even so easy that the trick used by Carpentier et al. to simplify the calculations has lost some of its interest.
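To give a flavour of how Maple shortcuts these hand calculations, here is a toy example of my own (not one from the attached worksheet): expanding the first-order upwind scheme for the advection equation u_t + c*u_x = 0 exposes the leading terms of its modified equation, including the numerical-diffusion term in u_xx.

```maple
# Upwind scheme for u_t + c*u_x = 0, expanded in powers of dt and dx
scheme := (u(x, t + dt) - u(x, t))/dt + c*(u(x, t) - u(x - dx, t))/dx:
convert(mtaylor(scheme, [dt = 0, dx = 0], 3), Diff);
# The O(dt) and O(dx) terms are the corrections that make up the modified equation.
```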

Two examples are given in the attached file
    EquivalentEquation.mw

If a moderator thinks that this post should be merged with Centered Divided Difference approximations, they can do so freely; I won't be offended.
     


This code enables building Centered Divided Difference (CDD) approximations of derivatives of a univariate function.
Depending on the stencil we choose, we can get arbitrarily high order approximations.

The extension to bivariate functions is based upon what is often named tensorization in numerical analysis: for instance, diff(f(x, y), [x, y]) is obtained this way (the description here is purely notional)

1. Let CDD_x be the CDD approximation of diff( f(x), x ).
  CDD_x is a linear combination of shifted replicates of f(x).
2. Let s be one of these shifted replicates,
  and let CDD_y(s) be the CDD approximation of diff( s(y), y ).
3. Replace in CDD_x each shifted replicate by its corresponding expression CDD_y(s).


    REMARKS:

• When I write, for instance, "approximation of diff(f(x), x)", this must be understood as short for "approximation of diff(f(x), x) at the point x = a".
• I agree that a notation like, for instance, diff(f(a), a) is not rigorous and that something like a Leibniz notation would be better. Unfortunately I don't know how to get it when I use mtaylor.
       

    restart:


CDDF stands for Centered Divided Difference Formula

    CDDF := proc(f, A, H, n, stencil)
  description "f = target function,\nA = point where the derivatives are approximated,\nH = step,\nn = order of the derivative,\nstencil = list of points for the divided difference\n";
      local tay, p, T, sol, unknown, Unknown, expr:

      tay := (s, m) -> convert(
                         eval(
                           convert(
                             taylor(op(0, f)(op(1, f)), op(1, f)=A, m),
                             Diff
                           ),
                           op(1, f)=A+s*H),
                         polynom
                       ):

      p   := numelems(stencil):
      T   := add(alpha[i]*tay(i, p+1), i in stencil):
      T   := convert(%, diff):

      if p > n+1 then
    sol := solve([seq(coeff(T, H, i)=0, i in subsop(n+1=NULL, [$0..p]))], [seq(alpha[i], i in stencil)])[];
      else
        sol := solve([seq(coeff(T, H, i)=0, i in subsop(n+1=NULL, [$0..n]))], [seq(alpha[i], i in stencil)])[];
      end if:

      if `and`(is~(rhs~(sol)=~0)[]) then
        WARNING("no solution found"):
        return
      else
        unknown := `union`(indets~(rhs~(sol))[])[];
        Unknown := simplify(solve(eval(T, sol) = Diff(op(0, f)(A), A$n), unknown));
        sol     := lhs~(sol) =~ eval(rhs~(sol), unknown=Unknown);
        expr    := normal(eval(add(alpha[i]*op(0, f)(A+i*H), i in stencil), sol));
      end if:

      return expr
    end proc:

    Describe(CDDF)


    # f = target function,
    # A = point where the derivatives are approximated,
    # H =
    # step,
    # n = order of the derivative,
    # stencil = list of points for the divided
# difference
    #
    CDDF( f, A, H, n, stencil )
     

     

    # 2-point approximation of diff(f(x), x) | x=a

    CDDF(f(x), a, h, 1, [-1, 1]);
    convert(simplify(mtaylor(%, h=0, 4)), Diff);

    -(1/2)*(f(a-h)-f(a+h))/h

     

    Diff(f(a), a)+(1/6)*(Diff(Diff(Diff(f(a), a), a), a))*h^2

    (1)

    # 3-point approximation of diff(f(x), x$2) | x=a

    CDDF(f(x), a, h, 2, [-1, 0, 1]);
    convert(simplify(mtaylor(%, h=0)), Diff);

    -(-f(a-h)+2*f(a)-f(a+h))/h^2

     

    Diff(Diff(f(a), a), a)+(1/12)*(Diff(Diff(Diff(Diff(f(a), a), a), a), a))*h^2

    (2)

# 5-point approximation of diff(f(x), x$2) | x=a

    CDDF(f(x), a, h, 2, [$-2..2]);
    convert(simplify(mtaylor(%, h=0, 8)), Diff);

    -(1/12)*(f(a-2*h)-16*f(a-h)+30*f(a)-16*f(a+h)+f(a+2*h))/h^2

     

    Diff(Diff(f(a), a), a)-(1/90)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a))*h^4

    (3)

    # 7-point approximation of diff(f(x), x$2) | x=a

    CDDF(f(x), a, h, 2, [$-3..3]);
    # simplify(taylor(%, h=0, 10));
    convert(simplify(mtaylor(%, h=0, 10)), Diff);

    -(1/180)*(-2*f(a-3*h)+27*f(a-2*h)-270*f(a-h)+490*f(a)-270*f(a+h)+27*f(a+2*h)-2*f(a+3*h))/h^2

     

    Diff(Diff(f(a), a), a)+(1/560)*(Diff(Diff(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a), a), a))*h^6

    (4)

    # 4-point staggered approximation of diff(f(x), x$3) | x=a

    CDDF(f(x), a, h, 3, [seq(-3/2..3/2, 1)]);
    convert(simplify(mtaylor(%, h=0, 6)), Diff);

    -(f(a-(3/2)*h)-3*f(a-(1/2)*h)+3*f(a+(1/2)*h)-f(a+(3/2)*h))/h^3

     

    Diff(Diff(Diff(f(a), a), a), a)+(1/8)*(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a))*h^2

    (5)

    # 6-point staggered approximation of diff(f(x), x$3) | x=a

    CDDF(f(x), a, h, 3, [seq(-5/2..5/2, 1)]);
    # simplify(taylor(%, h=0, 8));
    convert(simplify(mtaylor(%, h=0, 8)), Diff);

    (1/8)*(f(a-(5/2)*h)-13*f(a-(3/2)*h)+34*f(a-(1/2)*h)-34*f(a+(1/2)*h)+13*f(a+(3/2)*h)-f(a+(5/2)*h))/h^3

     

    Diff(Diff(Diff(f(a), a), a), a)-(37/1920)*(Diff(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a), a))*h^4

    (6)

    # 5-point approximation of diff(f(x), x$4) | x=a

    CDDF(f(x), a, h, 4, [$-2..2]);
    convert(simplify(mtaylor(%, h=0, 8)), Diff);

    (f(a-2*h)-4*f(a-h)+6*f(a)-4*f(a+h)+f(a+2*h))/h^4

     

    Diff(Diff(Diff(Diff(f(a), a), a), a), a)+(1/6)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a))*h^2

    (7)

    # 7-point approximation of diff(f(x), x$4) | x=a

    CDDF(f(x), a, h, 4, [$-3..3]);
    convert(simplify(mtaylor(%, h=0, 10)), Diff);

    (1/6)*(-f(a-3*h)+12*f(a-2*h)-39*f(a-h)+56*f(a)-39*f(a+h)+12*f(a+2*h)-f(a+3*h))/h^4

     

    Diff(Diff(Diff(Diff(f(a), a), a), a), a)-(7/240)*(Diff(Diff(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a), a), a))*h^4

    (8)


    A FEW 2D EXTENSIONS

    # diff(f(x, y), [x, y]) approximation over a (2 by 2)-point stencil

    stencil := [-1, 1]:

    # step 1: approximate diff(f(x, y), x) over stencil < stencil >

    fx  := CDDF(f(x), a, h, 1, stencil):
    fx  := eval(% , f=(u -> f[u](y))):
    ix  := [indets(fx, function)[]]:

    # step 2: approximate diff(g(y), y) over stencil < stencil > where
    #         g represents any function in fx.

    fxy := add(map(u -> CDDF(u, b, k, 1, stencil)*coeff(fx, u), ix)):

    # step 3: rewrite fxy in a more convenient form

    [seq(u=op([0, 0], u)(op([0, 1], u), op(1, u)), u in indets(fxy, function))]:
    fxy := simplify( eval(fxy, %) );

    convert(mtaylor(fxy, [h=0, k=0]), Diff)

    (1/4)*(f(a-h, b-k)-f(a-h, b+k)-f(a+h, b-k)+f(a+h, b+k))/(k*h)

     

    Diff(Diff(f(a, b), a), b)+(1/6)*(Diff(Diff(Diff(Diff(f(a, b), a), a), a), b))*h^2+(1/6)*(Diff(Diff(Diff(Diff(f(a, b), a), b), b), b))*k^2

    (9)

# Approximation of diff(f(x, y), [x, x, y, y]) over a (3 by 3)-point stencil


    stencil := [-1, 0, 1]:

# step 1: approximate diff(f(x, y), x$2) over stencil < stencil >

    fx  := CDDF(f(x), a, h, 2, stencil):
    fx  := eval(% , f=(u -> f[u](y))):
    ix  := [indets(fx, function)[]]:

    # step 2: approximate diff(g(y), y) over stencil < stencil > where
    #         g represents any function in fx.

    fxy := add(map(u -> CDDF(u, b, k, 2, stencil)*coeff(fx, u), ix)):

    # step 3: rewrite fxy in a more convenient form

    [seq(u=op([0, 0], u)(op([0, 1], u), op(1, u)), u in indets(fxy, function))]:
    fxy := simplify( eval(fxy, %) );

    convert(mtaylor(fxy, [h=0, k=0], 8), Diff)

    -(2*f(a, b-k)-4*f(a, b)+2*f(a, b+k)-f(a-h, b-k)+2*f(a-h, b)-f(a-h, b+k)-f(a+h, b-k)+2*f(a+h, b)-f(a+h, b+k))/(h^2*k^2)

     

    Diff(Diff(Diff(Diff(f(a, b), a), a), b), b)+(1/12)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), a), a), b), b))*h^2+(1/12)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), b), b), b), b))*k^2

    (10)

# Approximation of diff(f(x, y), [x, x, y]) over a (3 by 2)-point stencil

    stencil_x := [-1, 0, 1]:
    stencil_y := [-1, 1]:

# step 1: approximate diff(f(x, y), x$2) over stencil < stencil_x >

    fx  := CDDF(f(x), a, h, 2, stencil_x):
    fx  := eval(% , f=(u -> f[u](y))):
    ix  := [indets(fx, function)[]]:

    # step 2: approximate diff(g(y), y) over stencil < stencil > where
    #         g represents any function in fx.

    fxy := add(map(u -> CDDF(u, b, k, 1, stencil_y)*coeff(fx, u), ix)):

    # step 3: rewrite fxy in a more convenient form

    [seq(u=op([0, 0], u)(op([0, 1], u), op(1, u)), u in indets(fxy, function))]:
    fxy := simplify( eval(fxy, %) );

    convert(mtaylor(fxy, [h=0, k=0], 6), Diff)

    (1/2)*(2*f(a, b-k)-2*f(a, b+k)-f(a-h, b-k)+f(a-h, b+k)-f(a+h, b-k)+f(a+h, b+k))/(h^2*k)

     

    Diff(Diff(Diff(f(a, b), a), a), b)+(1/12)*(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), a), a), b))*h^2+(1/6)*(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), b), b), b))*k^2

    (11)

    # Approximation of the laplacian of f(x, y)

    stencil := [-1, 0, 1]:

# step 1: approximate diff(f(x, y), x$2) and diff(f(x, y), y$2) over stencil < stencil >

    fx  := CDDF(f(x), a, h, 2, stencil):
    fy  := CDDF(f(y), b, k, 2, stencil):

    fxy := simplify( eval(fx, f=(u -> f(u, b))) + eval(fy, f=(u -> f(a, u))) );

    convert(mtaylor(fxy, [h=0, k=0], 6), Diff)

    (f(a-h, b)-2*f(a, b)+f(a+h, b))/h^2+(f(a, b-k)-2*f(a, b)+f(a, b+k))/k^2

     

    Diff(Diff(f(a, b), a), a)+Diff(Diff(f(a, b), b), b)+(1/12)*(Diff(Diff(Diff(Diff(f(a, b), a), a), a), a))*h^2+(1/12)*(Diff(Diff(Diff(Diff(f(a, b), b), b), b), b))*k^2

    (12)

     


     

    Download CDD.mw

    Recently Mapleprimes user @vs140580  asked here about finding the shortest returning walk in a graph (explained more below). I provided an answer using the labelled adjacency matrix. @Carl Love pointed out that the storage requirements are significant. They can be reduced by storing only the vertices in the walks and not the walks themselves. This can be done surprisingly easily by redefining how matrix multiplication is done using Maple's LinearAlgebra:-Generic package.

    (The result is independent of the chosen vertex.)

    restart

    with(GraphTheory)

    Consider the following graph. We want to find, for a given vertex v, the shortest walk that visits all vertices and returns to v. A walk traverses the graph along the edges, and repeating an edge is allowed (as distinct from a path, in which an edge can be used only once).

    G := AddVertex(PathGraph(5), [6, 7]); AddEdge(G, {{3, 7}, {4, 6}, {6, 7}}); DrawGraph(G, layout = circle, size = [250, 250])

    GRAPHLN(undirected, unweighted, [1, 2, 3, 4, 5, 6, 7], Array(1..7, {(1) = {2}, (2) = {1, 3}, (3) = {2, 4}, (4) = {3, 5}, (5) = {4}, (6) = {}, (7) = {}}), `GRAPHLN/table/2`, 0)

    n := NumberOfVertices(G)

    7

    A := AdjacencyMatrix(G)

    As is well known, the (i, j) entry of A^k gives the number of walks of length k between vertices i and j. The labelled adjacency matrix shows the actual walks.

    Alab := `~`[`*`](A, Matrix(n, n, symbol = omega))

    Matrix(%id = 36893490455680637396)

For example, the (3,3) entry of Alab^2 has 3 terms, one for each walk of length 2 that starts and ends at vertex 3. The ω[i, j] factors in a term give the edges visited.

    So the algorithm to find the shortest walk is to raise Alab to successively higher powers until one of the terms in the diagonal entry for the vertex of interest has indices for all the vertices.

    Alab^2

    Matrix(%id = 36893490455709504684)

    The expressions for higher powers get very large quickly, so an algorithm that only retains the sets of vertices in each term as a set of sets will use less storage space. So we can consider the following modified labelled adjacency matrix.

    B := Matrix(n, n, proc (i, j) options operator, arrow; if A[i, j] = 1 then {{i, j}} else {} end if end proc)

    Matrix(%id = 36893490455703852204)

    Now we need to modify how we do matrix multiplication, but Maple has the LinearAlgebra:-Generic package to do this. We can redefine addition and multiplication to operate correctly on the sets of sets.

    Consider two sets of sets of vertices for walks.

    a := {{7}, {1, 3}, {2, 6, 7}}; b := {{1, 2}, {2, 3, 8}}

    {{7}, {1, 3}, {2, 6, 7}}

    {{1, 2}, {2, 3, 8}}

    Addition is just combining the possibilities, and set union will do this. And addition of "zero" should add no possibilities, so we take {} as zero.

    `union`(a, b); `union`(a, {})

    {{7}, {1, 2}, {1, 3}, {2, 3, 8}, {2, 6, 7}}

    {{7}, {1, 3}, {2, 6, 7}}

    Multiplication is just combining all pairs by union. Notice here that {7} union {1,3,5} and {1,5} union {3,7} give the same result, but that we do not get duplicates in the set.

    {seq(seq(`union`(i, j), `in`(i, a)), `in`(j, b))}

    {{1, 2, 3}, {1, 2, 7}, {1, 2, 3, 8}, {1, 2, 6, 7}, {2, 3, 7, 8}, {2, 3, 6, 7, 8}}

The unit for multiplication should leave the set of sets unchanged, so {{}} can be used.

    {seq(seq(`union`(i, j), `in`(i, a)), `in`(j, {{}}))}

    {{7}, {1, 3}, {2, 6, 7}}

    And we should check that zero multiplied by a is zero

    {seq(seq(`union`(i, j), `in`(i, a)), `in`(j, {}))}

    {}

Define these operations for the LinearAlgebra:-Generic package:

    F[`=`] := `=`; F[`/`] := `/`; F[`0`] := {}; F[`1`] := {{}}; F[`+`] := `union`; F[`*`] := proc (x, y) options operator, arrow; {seq(seq(`union`(i, j), `in`(i, x)), `in`(j, y))} end proc

    Warning, (in F[*]) `j` is implicitly declared local

    Warning, (in F[*]) `i` is implicitly declared local

    Compare B^2 with Alab^2. We have lost information about the details of the walks except for the vertices visited.

    LinearAlgebra:-Generic:-MatrixMatrixMultiply[F](B, B)

    Matrix(%id = 36893490455680647164)

    So here is a procedure for finding the length of the shortest walk starting and ending at vertex v.

    WalkLength:=proc(G::Graph,v)
      uses GraphTheory;
      local x,y,i,j,vv,A,B,F,n,vertset;
      if IsDirected(G) then error "graph must be undirected" end if;
      if not member(v,Vertices(G),'vv') then error "vertex not in graph" end if;
      if not IsConnected(G) then return infinity end if;
      F[`=`]:=`=`:F[`/`]:=`/`: # not required but have to be specified
      F[`0`]:={}:
      F[`1`]:={{}}:
      F[`+`]:=`union`;
      F[`*`]:=(x,y)->{seq(seq(i union j, i in x), j in y)};
      n:=NumberOfVertices(G);
      vertset:={$(1..n)};
      A:=Matrix(n,n, (i, j)-> if AdjacencyMatrix(G)[i, j] = 1 then {{i, j}} else {} end if);
      B:=copy(A);
      for i from 2 do
        B:=LinearAlgebra:-Generic:-MatrixMatrixMultiply[F](B,A);
      until vertset in B[vv,vv];
      i;
    end proc:

    WalkLength(G, 1)

    10

    NULL

    Download WalksGenericSetOfSets.mw

    In an age where our lives are increasingly integrated online, cybersecurity is more important than ever. Cybersecurity is the practice of protecting online information, systems, and networks from malicious parties. Whenever you access your email, check your online banking, or make a post on Facebook, you are relying on cybersecurity systems to keep your personal information safe. 

    Requiring that users enter their password is a common security practice, but it is nowhere near hacker-proof. A common password-hacking strategy is the brute-force attack. This is when a hacker uses an automated program to guess random passwords until the right one is found. The dictionary attack is a similar hacking strategy, where guesses come from a list like the 10,000 Most Common Passwords.

    The easiest way to prevent this kind of breach is to use strong passwords. First, to protect against dictionary attacks, never use a common password like “1234” or “password”. Second, to protect against brute-force attacks, consider how the length and characters used affect the guessability. Hackers often start by guessing short passwords using limited types of characters, so the longer and more special characters used, the better.

    Using the Strong Password Exploration Maple Learn document, you can explore how susceptible your passwords may be to a brute-force attack. For example, a 6-character password using only lowercase letters and numbers could take as little as 2 seconds to hack.

    Whereas an 8-character password using uppercase letters, lowercase letters, and 10 possible special characters could take more than 60 hours to crack.
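Both estimates can be reproduced with a back-of-the-envelope search-space count (the guessing rate below is my own assumption, for illustration only):

```maple
# 6 characters drawn from 26 lowercase letters + 10 digits
evalf((26 + 10)^6/10^9);            # seconds at an assumed 10^9 guesses/s: about 2.2
# 8 characters drawn from 52 letters + 10 special characters
evalf((26 + 26 + 10)^8/10^9/3600);  # hours at the same rate: about 60.6
```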

These hacking times are only estimations, but they do provide insight into the relative strength of different passwords. To learn more about password possibilities, check out the Passwords Collection on Maple Learn.

    The inverse problem of a mathematical question is often very interesting.

    I'm glad to see that Maple 2023 has added several new graph operations. The GraphTheory[ConormalProduct], GraphTheory[LexicographicProduct], GraphTheory[ModularProduct] and GraphTheory[StrongProduct] commands were introduced in Maple 2023.

    In fact, we often encounter their inverse problems in graph theory as well. Fortunately, most of them can find corresponding algorithms, but the implementation of these algorithms is almost nonexistent.

     

    I once asked a question involving the inverse operation of the lexicographic product.

    Today, I will introduce the inverse operation of line graph operations. (In fact, I am trying to approach these problems with a unified perspective.)

     

To obtain the line graph of a graph is easy, but what about the reverse? That is to say, to test whether a graph is a line graph. Moreover, if a graph g is the line graph of some other graph h, can we find h? (Maybe not uniquely: Whitney's isomorphism theorem tells us that if the line graphs of two connected graphs are isomorphic, then the underlying graphs are isomorphic, except in the case of the triangle graph K_3 and the claw K_{1,3}, which have isomorphic line graphs but are not themselves isomorphic.)

    Wikipedia tells us that there are two approaches, one of which is to check if the graph contains any of the nine forbidden induced subgraphs. 

    Beineke's forbidden-subgraph characterization:  A graph is a line graph if and only if it does not contain one of these nine graphs as an induced subgraph.

This approach can always be implemented, but it may not be very fast. Another approach is to use the linear-time algorithms mentioned in the following articles.

    • Roussopoulos, N. D. (1973), "A max {m,n} algorithm for determining the graph H from its line graph G", Information Processing Letters, 2 (4): 108–112, doi:10.1016/0020-0190(73)90029-X, MR 0424435

    Or: 

    •   Lehot, Philippe G. H. (1974), "An optimal algorithm to detect a line graph and output its root graph", Journal of the ACM, 21 (4): 569–575, doi:10.1145/321850.321853, MR 0347690, S2CID 15036484.


SageMath can do that:

•   root_graph() – returns the root graph corresponding to the given graph.

•   is_line_graph() – checks whether a graph is a line graph.

    For example, K_{2,2,2,2} is not the line graph of any graph.

    K2222 = graphs.CompleteMultipartiteGraph([2, 2, 2, 2])
    C=K2222.is_line_graph(certificate=True)[1]
    C.show() # Containing the ninth forbidden induced subgraph.
    

     


     

Another Sage example, showing that the complement of the Petersen graph is the line graph of K_5:

    P = graphs.PetersenGraph()
    C = P.complement()
    sage.graphs.line_graph.root_graph(C, verbose=False)
    

     

    (Graph on 5 vertices, {0: (0, 1), 1: (2, 3), 2: (0, 4), 3: (1, 3), 4: (2, 4), 5: (3, 4), 6: (1, 4), 7: (1, 2), 8: (0, 2), 9: (0, 3)})
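While Maple has no root-graph command yet, it can at least verify such statements in the forward direction with its existing LineGraph, GraphComplement and IsIsomorphic commands:

```maple
with(GraphTheory): with(SpecialGraphs):
# The complement of the Petersen graph is isomorphic to the line graph of K_5
IsIsomorphic(GraphComplement(PetersenGraph()), LineGraph(CompleteGraph(5)));
# true
```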
    

     

    Following this line of thought, can Maple gradually implement the inverse operations of some standard graph operations? 

    Here are some examples:

    •   CartesianProduct
    •   TensorProduct
    •   ConormalProduct
    •   LexicographicProduct
    •   ModularProduct
    •   StrongProduct
    •   LineGraph
    •  GraphPower

    The areas of statistics and probability are my favorite in mathematics. This is because I like to be able to draw conclusions from data and predict the future with past trends. Probability is also fascinating to me since it allows us to make more educated decisions about real-life events. Since we are supposed to get a big snow storm in Waterloo, I thought I would write a blog post discussing conditional probability using the Probability Tree Generator, created by Miles Simmons.

If the probability of snowfall on any given day during a Waterloo winter is 0.75, the probability that the schools are closed given that it has snowed is 0.6, and the probability that the schools are closed given that it hasn't snowed is 0.1, then we get the following probability tree, created by Miles's Maple Learn document:

    From this information we can come to some interesting conclusions:

    What is the probability that the schools are closed on a given day?

    From the Law of total probability, we get:

    Thus, during a very snowy Waterloo winter, we could expect a 0.475 chance of schools being closed on any given day. 

One of the features of this document is that the node probabilities are calculated. You can see this by comparing the second-last step to the numbers at the ends of the probability tree's nodes.

    What is the probability that it has snowed given that the schools are closed?

    From Bayes’ Theorem, we get:

    Thus, during a very snowy Waterloo winter, we expect there to be a probability of 0.947 that it has snowed if the schools are closed. 
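Both numbers can be checked with two lines of arithmetic on the probabilities given above:

```maple
p_snow := 0.75:  p_closed_snow := 0.6:  p_closed_nosnow := 0.1:
# Law of total probability
p_closed := p_snow*p_closed_snow + (1 - p_snow)*p_closed_nosnow;   # 0.475
# Bayes' theorem
p_snow_closed := p_snow*p_closed_snow/p_closed;                    # 0.9473...
```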

    We can also add more events to the tree. For example, if the students are happy or sad given that the schools are open:

    Even though we would all love schools to be closed 47.5% of the winter days in Waterloo, these numbers were just for fun. So, the next time you are hoping for a snow day, make sure to wear your pajamas inside out and sleep with a spoon under your pillow that night!

    To explore more probability tree fun, be sure to check out Miles’s Probability Tree Generator, where you can create your own probability trees with automatically calculated node probabilities and export your tree to a blank Maple Learn document. Finally, if you are interested in seeing more of our probability collection, you can find it here!
