All things Tech with a dash of Geek for good measure!

Controlling power to the N64

In my last post I mentioned that I had yet to get remotely controllable power to the N64. In the end, I went with a TCP Smart 13A plug (because I am in the UK), but it turned out not to meet my needs, as it was only controllable from an app installed on either Android or iOS. It also worried me that it was obviously controlled by an ESP32 microcontroller and that the firmware didn't even attempt to rebrand the hostname. Although this might meet the needs of some (who just want to be able to reboot their N64 from the couch), it was not good enough, and trying to make it work through IFTTT turned out to be a nightmare. Further investigation (using Pi-hole and Fiddler) showed that it was actually a rebranded Tuya device. This also worried me, as these devices use a Chinese cloud and I have no idea what they are doing with my information.

A quick search on github.com revealed a promising lead: https://github.com/ct-Open-Source/tuya-convert claimed to be able to “break in” and flash the device with custom firmware. Furthermore, there is a wide community of open-source firmware for these devices. I settled on Tasmota and promptly flashed the device. However, the documentation was lacking, and I ended up using a template hosted elsewhere.

I am now able to use PowerShell to turn the device on and off using a simple URL on my local network, with no cloud involved. If I want to control it from my phone, I just connect to the integrated web page and press a button!
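Tasmota's web API takes console commands in a URL of the form `/cm?cmnd=Power%20On`. A minimal Python sketch of building and sending those URLs (the device address is a placeholder, and the helper names are my own):

```python
import urllib.parse
import urllib.request

def tasmota_url(host, command):
    """Build a Tasmota web-API URL for a console command like 'Power On'."""
    return f"http://{host}/cm?cmnd={urllib.parse.quote(command)}"

def send(host, command):
    """Send the command to the device and return the JSON response body."""
    with urllib.request.urlopen(tasmota_url(host, command)) as resp:
        return resp.read().decode()

# Example (assumes the plug is at 192.168.0.50 on your LAN):
# send("192.168.0.50", "Power On")
print(tasmota_url("192.168.0.50", "Power Toggle"))
```

The same URL works from PowerShell with `Invoke-RestMethod`, or pasted straight into a browser.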

A simple fix with no N64 modification required, and costing only £10! I think you will agree this is far better than other solutions out there!

When developing homebrew or ROM hacks for the N64, an emulator like CEN64 can be used to get a feel for whether something will run on actual hardware. It is also possible to create a CI pipeline that builds on each check-in and feeds back compiler warnings and the like. However, there is nothing like testing on, and gaining feedback from, the actual hardware, which generally means that you need to be in front of it.

To this end, I have set out to build a fully automated test rig that allows me to work remotely whilst still ensuring that what I build is able to run successfully.

Progress so far:

Computer OS – Since I had one “lying around”, I decided to use a Lenovo ThinkCentre M700. It is powerful enough, compact, efficient, and has enough USB ports for what I currently need. I also decided on Windows 10 running WSL2, partly because it is easier to get to grips with what I actually require, and partly because I know that some of the hardware I have lying around and plan to use doesn't play as nicely with Linux. Once the prototype is fully working, it will be easier to judge whether a Linux distribution would be a better choice.

Providing access – VNC with a secure password and direct network access to the internet is quite simple to arrange, so this was set up fairly quickly. I also decided it was probably a good idea to totally separate this PC from my main LAN in case I let someone borrow its facilities. As my network mainly uses Ubiquiti UniFi, this should have been an easy process; however, my network switches are of multiple brands, and it took a while to make them play nicely with the necessary VLANs.

Facilitating the ability to load the ROM onto the cart – I currently have an ED64 V3 and an X7, both of which provide a USB interface for loading ROMs via PC software. I have much more control over the source for the ED64 V3, so for the time being I decided to use it for the first rig. If required, the flash cart could be swapped out for a period to meet a specific need, or a second rig could be set up, subject to costs. There are a few different implementations of loader software out there, but for the time being I will use my own (available on GitHub), unless it becomes necessary to swap to a different one.

Returning the output (screen and debug messages) – For the first version (since I had one lying around) I decided to use a Hauppauge 1975 with WinTV 8.5. The S-Video output from the N64 was used to provide the best possible picture. In the future, an UltraHDMI mod paired with an Elgato could be used for really good screenshots (and would possibly run more easily on Linux), but there is a downside: that picture is more lenient when displaying NTSC through a PAL console, so it might not give the tester the right feedback. It will also be necessary to find a way to take a screenshot or short video via a script.

Controlling the input – This has proved quite difficult with the bits and pieces I have lying around. It will be reinvestigated in a future revision.

Rebooting the N64 – Seemingly the N64 has no way to reboot other than the hard reset button on the console itself. It would be quite easy to solder a wire to the reset button and control it via GPIO on something like an Arduino (or to use a later revision of the UltraHDMI mod). However, it would probably be better to control the N64's power via a network-controlled mains adapter. This has the added benefit that the N64 is off when no test is taking place (so the PSU does less work, given the age of the console) and also ensures a clean slate when booting the ROM. A smart Wi-Fi plug has been ordered and will be added to the prototype once it arrives.
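The plug has not arrived yet, so the exact interface is an assumption, but the power-cycle sequence itself can be sketched now. Here `send` is a stand-in for whatever ends up talking to the plug, and the command strings are placeholders:

```python
import time

def power_cycle(send, off_delay=5.0):
    """Reboot the console by cutting mains power, waiting briefly,
    then powering back on for a clean boot of the ROM under test."""
    send("Power Off")
    time.sleep(off_delay)  # give the PSU time to fully discharge
    send("Power On")

# Example with a dummy sender that just records the commands:
sent = []
power_cycle(sent.append, off_delay=0.1)
print(sent)  # → ['Power Off', 'Power On']
```

A test harness would call this between each ROM deployment so every run starts from a cold boot.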

The next attempt will make improvements on the above and also consider:

  • Further details on controlling power to the N64
  • Use an N360 mod paired with GIMX (although another solution would be possible using the N64 controller input, my console is already modded with an N360)
  • Azure DevOps build agent and associated scripts
  • Switch to using a Raspberry Pi

Following on from the previous article, to add code coverage to nopCommerce when building with Azure Pipelines as your DevOps pipeline, it is necessary to add the following NuGet packages to the nopCommerce test projects:

Install Microsoft.CodeCoverage into each of the four test projects:

Microsoft.CodeCoverage

Install coverlet.collector into the same four test projects:

coverlet.collector
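Equivalently, the packages can be referenced directly in each test project's `.csproj` (the version numbers below are placeholders; use whatever is current):

<ItemGroup>
  <!-- VSTest code-coverage support -->
  <PackageReference Include="Microsoft.CodeCoverage" Version="x.y.z" />
  <!-- Cross-platform Cobertura collector used by --collect "XPlat Code Coverage" -->
  <PackageReference Include="coverlet.collector" Version="x.y.z" />
</ItemGroup>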


It is also necessary to adjust your build pipeline. You can do this using the following in your `azure-pipelines.yml`:

pool:
  name: Azure Pipelines

steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: restore
    projects: ./src/NopCommerce.sln

- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    projects: ./src/NopCommerce.sln
    arguments: '--configuration $(BuildConfiguration)'

- task: DotNetCoreCLI@2
  displayName: 'dotnet test'
  inputs:
    command: test
    projects: |
     ./src/Tests/Nop.Core.Tests/Nop.Core.Tests.csproj
     ./src/Tests/Nop.Web.MVC.Tests/Nop.Web.MVC.Tests.csproj
     ./src/Tests/Nop.Services.Tests/Nop.Services.Tests.csproj
    arguments: '--configuration $(BuildConfiguration) --collect "XPlat Code Coverage"'

- task: PublishCodeCoverageResults@1
  displayName: 'Publish code coverage'
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'

- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    ArtifactName: '$(Parameters.ArtifactName)'

This will give you full code coverage results in the build summary.


If only SonarQube was also enabled ;-p but we are not the authors, so that is for another tutorial!

So it is time to upgrade your nopCommerce install to the latest and greatest… Like me, you have given up on Travis, want to customise the source, and use Azure Pipelines as your DevOps pipeline…

Given that you are competent with Git and have https://github.com/marketplace/azure-pipelines enabled in your repo, it is quite simple: just create the following `azure-pipelines.yml` file in the root folder of your repo and (subject to adding the extension to GitHub) away you go!

pool:
  name: Azure Pipelines
steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: restore
    projects: ./src/NopCommerce.sln
- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    projects: ./src/NopCommerce.sln
    arguments: '--configuration $(BuildConfiguration)'
- task: DotNetCoreCLI@2
  displayName: 'dotnet test'
  inputs:
    command: test
    projects: |
     ./src/Tests/Nop.Core.Tests/Nop.Core.Tests.csproj
     ./src/Tests/Nop.Web.MVC.Tests/Nop.Web.MVC.Tests.csproj
     ./src/Tests/Nop.Services.Tests/Nop.Services.Tests.csproj
    arguments: '--configuration $(BuildConfiguration)'
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    ArtifactName: '$(Parameters.ArtifactName)'

In the next part, we shall see if we can get Code Coverage working (although you might be disappointed by the results!)

If anyone is interested, I have uploaded a C# library for controlling XpressNet-compatible model rail command stations to GitHub at https://github.com/networkfusion/XpressNetSharp. Documentation and the example app are a bit light at the moment, but I hope to improve them over time; if you are more than an amateur developer, hopefully what I have written will make sense. It was designed from scratch to be as speedy as possible with an event-driven architecture, and is much more complete, command-wise, than many of the other libraries available.

I also have a full program that is capable of talking to an eLink (it won't work with the out-of-box example due to a boot-sequence check packet). I did plan to release it this year, but since moving to a house that needs a lot of renovating, I haven't had the time to bring it to a state where I think it is ready. However, if anyone wants a try, PM me and I may send you a link. I spent an extraordinary amount of time working on the communications aspect, and as such it is much less prone to USB errors than RailMaster.

I recently won a competition with the prize being an Android Wear LG G Watch R. Cool, I thought… until I found out that it is a brick without an Android smartphone running Android 4.3 or later. You see, I have a Windows Phone (Lumia 1020) and I am quite happy with it, but it is getting old, so I thought I could possibly upgrade to a Samsung Galaxy S6, as 95% of the apps I use on my current phone are available, or have a comparable alternative, and then some. The problem is that the camera on the 1020 is immense; coupled with the Xenon flash, the camera interface makes it near perfect (only held back by the between-photo time). I also love the Qi wireless charging and have a dock in my car that allows me to be free of wires. Update: I believe the S6 will be Qi compatible… To me, these are show stoppers, well that and Google's blatant disregard for privacy!

So I thought to myself, how can I get this watch to actually do something useful… well it turned out not to be too hard…

I downloaded the latest Android 4.4 image from http://www.android-x86.org/download and spun up a new VM in VMware Workstation (although it should also work in the free VMware Player). I then installed Android (instructions can be found linked from http://blogs.vmware.com/workstation/2014/02/experience-android-kitkat-vmware-workstation.html if needed). Don't bother with Hyper-V, as it can't connect to peripheral devices, which is really annoying!

I then connected my computer's built-in Bluetooth to the VM (right-click the Bluetooth symbol in the bottom right of the VMware window and click ‘connect’) and set up Android by following the instructions (which included creating a Gmail account).

Next, it was simply a case of downloading the “Android Wear” app from the Play Store and turning Bluetooth on.

So I now have a watch that can tell the correct time and monitor my heart rate and steps when away from my computer, and that, when the VM is switched on, is a perfectly usable smartwatch (without phone call and SMS notifications, of course).

I guess the next thing will be to try and extract the APK to figure out how the protocol works (some research can be found at http://naniktolaram.com/?p=364) so that I can hopefully make it integrate to some level with Windows Phone (it already allows Bluetooth pairing); however, their notification API is locked to OEMs and ‘special’ companies! The true hope is that Google will release a companion app for Windows Phone (“never gonna happen” springs to mind), but that is for another day, as other projects are still higher on my priority list.

When you have created your configuration in ICE (Image Configuration Editor), you might want to burn it to disk. To do this, from the Tools menu, hover over “Create Media”, then click “Create IBW image from answer file”.

This will then create a folder with everything needed to create a disk.

To create the disk, you can either follow the guide at http://www.windowsvalley.com/how-to-create-windows-7-bootable-dvd-using-nero/ or open the Windows PE command prompt and use the oscdimg tool.

e.g. oscdimg -n -bc:\WindowsEmbeddedMediaShare\BOOT\ETFSBOOT.COM C:\WindowsEmbeddedMediaShare C:\MyEmbeddedDisk.iso

Some other neat tricks can be found in this PDF: http://www.intervalzero.com/pdfs/MiniTutorial_RTX_WES7_ICE.pdf

Packages needed

1. “Enhanced Write Filter” (“FeaturePack” => “Embedded Enabling Features” => “Enhanced Write Filter”)
2. “Embedded Windows 7 Boot Environment” (“FeaturePack” => “Boot Environments” => “Embedded Windows 7 Boot Environment”)
This package contains the HORM aware boot binaries necessary to make HORM work. Do not use the native boot binaries in the peer package (“FeaturePack” => “Boot Environments” => “Windows 7 Boot Environment”)
3. Useful utilities such as shutdown.exe, regedit, diskpart, etc.:
3a.”Power Management” (“FeaturePack” => “Management” => “Power Management”)
3b. “System Management” (“FeaturePack” => “Management” => “System Management”)

Preferably, resolve all optional dependencies as well. Build and install the image containing these packages, then follow these steps to configure HORM post-install:

(4) Enable hibernation
> powercfg.exe /h ON

(5) Disable false bootstat warnings
> bcdedit.exe /set {current} bootstatuspolicy ignoreallfailures

(6) Enable EWF on all partitions
> ewfmgr.exe /all /enable

(7) Restart to have the command take effect
> shutdown.exe /r /t 0

(8) Activate HORM
> ewfmgr.exe C: /activatehorm

(9) Capture the HORM state by hibernating the machine once
> shutdown.exe /h

(10) Resume the machine and start using HORM. At this point, each restart should result in a resume from the state captured in the previous step.

(11) If you wish to deactivate HORM:
> ewfmgr.exe C: -deactivatehorm

(12) If you wish to disable EWF (deactivate HORM first):
> ewfmgr.exe /all /disable
followed by a restart.

When trying to install a Windows Embedded Standard 7 image on VMware Workstation, make sure to add the SCSI driver to the image, or change the hard drive type to IDE. If you don't do this, you will receive a STOP 0x0000007B error when installing.

CNC Resources

OK, so I am looking into building a CNC machine.

So I thought I’d show the resources I have found so far…

CNC Resources
http://www.cnccookbook.com
http://rockcliffcnc.com
http://machsupport.com
http://buildyourcnc.com
http://www.neo7cnc.com

CNC Forums
http://www.cnczone.com/forums
http://www.mechmate.com/forums

CNC Shops
http://www.worldofcnc.com
http://www.cnc4you.co.uk

Control Systems

http://dynomotion.com/Help/
http://www.ajaxcnc.com/mach_cnc_systems.htm

Aluminum parts suppliers

http://www.hepcomotion.com/en/literature-psd-screw-driven-linear-actuator-pg-16-get-24

Particularly the layout: http://www.hepcomotion.com/en/view-pg-21-view-611

Picture of a possible control box layout

Videos
http://www.youtube.com/user/momusCNCdesign#

This is a future project, so now that the list is here, maybe I won't forget!!!
